
WO2017020045A1 - System and methods for malarial retinopathy screening - Google Patents


Info

Publication number
WO2017020045A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
retinal
malarial
retinopathy
images
Prior art date
Application number
PCT/US2016/045051
Other languages
French (fr)
Inventor
Vinayak Joshi
Peter Soliz
Simon E. BARRIGA
Gilberto Zamora
Carla P. AGURTO RIOS
Original Assignee
VisionQuest Biomedical LLC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by VisionQuest Biomedical LLC filed Critical VisionQuest Biomedical LLC
Publication of WO2017020045A1 publication Critical patent/WO2017020045A1/en

Classifications

    • A - HUMAN NECESSITIES
    • A61 - MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B - DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 3/00 - Apparatus for testing the eyes; Instruments for examining the eyes
    • A61B 3/10 - Objective types, i.e. instruments for examining the eyes independent of the patients' perceptions or reactions
    • A61B 3/12 - Objective types, i.e. instruments for examining the eyes independent of the patients' perceptions or reactions for looking at the eye fundus, e.g. ophthalmoscopes
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 - Image analysis
    • G06T 7/0002 - Inspection of images, e.g. flaw detection
    • G06T 7/0012 - Biomedical image inspection
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 - Arrangements for image or video recognition or understanding
    • G06V 10/98 - Detection or correction of errors, e.g. by rescanning the pattern or by human intervention; Evaluation of the quality of the acquired patterns
    • G06V 10/993 - Evaluation of the quality of the acquired pattern
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/18 - Eye characteristics, e.g. of the iris
    • G06V 40/193 - Preprocessing; Feature extraction
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 - Image acquisition modality
    • G06T 2207/10024 - Color image
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 - Image acquisition modality
    • G06T 2207/10048 - Infrared image
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 - Subject of image; Context of image processing
    • G06T 2207/30004 - Biomedical image processing
    • G06T 2207/30041 - Eye; Retina; Ophthalmic
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 - Subject of image; Context of image processing
    • G06T 2207/30168 - Image quality inspection
    • Y - GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 - TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02A - TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A 50/00 - TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE in human health protection, e.g. against extreme weather
    • Y02A 50/30 - Against vector-borne diseases, e.g. mosquito-borne, fly-borne, tick-borne or waterborne diseases whose impact is exacerbated by climate change

Definitions

  • Embodiments of the present invention teach a method to process a retinal image in an automatic manner for detecting retinal lesions associated with Malarial retinopathy (MR).
  • the retinal camera comprises any imaging device that produces color images of the retina.
  • the camera comprises computer software that assesses the quality of an input color image and provides feedback to a photographer if the image quality is lower than a predefined reference.
  • One embodiment of the present invention is the computer method integrated into a low-cost, portable retinal camera, such as but not limited to, VisionQuest’s iRx-Cam.
  • the computer method was evaluated on a representative set of 86 retinal color images from CM subjects. MR was detected with a sensitivity of 95% and a specificity of 100%.
  • INTRODUCTION [0004] Note that the following discussion may refer to a number of publications and references. Discussion of such publications herein is given for more complete background of the scientific principles and is not to be construed as an admission that such publications are prior art for patentability determination purposes.
  • Cerebral malaria (CM) is a life-threatening clinical syndrome associated with malarial infection.
  • CM results in a loss of US$12 billion in gross domestic product (GDP) and 35.4 million disability-adjusted life years due to mortality and morbidity of CM patients in Africa.
  • CM is caused by the Plasmodium falciparum (PF) parasite that sequesters the erythrocytes in micro-vessels of cerebral and retinal circulation, causing the appearance of retinal abnormalities of MR, visible in color retinal images.
  • the histological features of MR present as retinal whitening and vessel discoloration, which are unique to MR, are related to ischemia, and are a reflection of processes occurring in the brain.
  • the extent of retinal whitening and number of hemorrhages are related to the duration of coma and likelihood of death from CM.
  • screening for MR improves the specificity of CM diagnosis from 61% (using the World Health Organization (WHO) criteria) to 100% (WHO criteria + MR). Studies have reported that presence of MR has 95% sensitivity and 100% specificity in patients with severe CM cases that were autopsied.
  • blood sample analysis for detection of PF parasite and exclusion of other encephalopathies causing coma (e.g. meningitis, post-ictal state, hypoglycemia).
  • Beare et al. (2006) reported that the specificity of the WHO criteria is only 61%, while Taylor et al.
  • of every 1000 comatose children admitted to a hospital, 124 to 210 will be misdiagnosed with CM.
  • the number of misdiagnosed CM children can be reduced to as little as 6 per 1000.
  • parenteral anti-malarial treatment (artesunate or quinine)
  • An important point to make is that the clinical diagnosis of CM is highly sensitive (>95%), and that an application of the present invention is to reduce the false positive rate and to increase the positive predictive value after a diagnosis of CM has already been made through clinical symptoms.
  • the present invention’s sensitivity to MR diagnosis is not an issue, as both MR-positive (indicating true CM) and MR-negative (indicating non-CM disease) patients get treated for CM.
  • the improved specificity to MR diagnosis plays a vital role in accurate identification of non-CM cases (MR-negative) which can then be investigated for other causes of coma. Therefore, treatment strategies for other non-malarial causes of coma such as pneumonia or bacterial meningitis can be applied, thus saving hundreds of thousands of children’s lives.
  • an ophthalmologist performs indirect ophthalmoscopy. At present, the specialized equipment and human skill required for indirect ophthalmoscopy remain barriers to wider use of MR detection in clinical practice.
  • Embodiments of the present invention comprise computer implemented methods that automatically analyze the retinal color images to detect MR lesions associated with cerebral malaria, which can be integrated into a low-cost and portable image acquisition apparatus as part of an MR screening system.
  • a further embodiment of the present invention provides automatically generated cues to the user when an image is unusable because of low quality, comprising non-uniform illumination, low contrast, and noise. These tasks are preferably performed in real time, while the patient is still being imaged, by a computing unit that executes computer implemented algorithms and is integrated with the image acquisition system, e.g. a retinal camera.
  • the retinal camera of one embodiment of the present invention is designed to meet the clinical environment for imaging the CM affected population in Africa, including being portable and low- cost, such as but not limited to, VisionQuest’s i-RxCam.
  • the camera is preferably equipped with auto-image-quality feedback software that alerts a photographer to the quality of captured images and to any need for re-imaging.
  • an embodiment of the present invention as described herein preferably processes the retinal images using a computing processor that produces MR detection results without a need of human intervention.
  • the device design reduces the need for high levels of training or skill and can be operated by a medical technician or a nurse.
  • An embodiment of the present invention comprises the first fully automated software for comprehensive analysis of MR abnormalities and their statistically optimal combination to detect presence/absence of MR with high accuracy. This also eliminates the challenges to train healthcare staff to perform ophthalmoscopy and interpret retinal images. The method is tuned to yield high specificity, addressing the current clinical requirement to prevent deaths resulting from over-diagnosis of CM.
  • An embodiment of the present invention is designed to address the current clinical needs in Africa: 1) a low-cost, portable retinal imaging device (retinal camera) affordable and accessible to the targeted population; 2) a retinal camera designed with a wide field of view lens for detecting unique MR lesions; 3) fully automated software-based MR detection using a portable processing unit; and 4) easy adaptability in clinical settings in Africa for target users such as medical technicians, nurses, and doctors working at a clinical facility for CM diagnosis. Additionally, an embodiment of the present invention is an important tool for clinical investigators, epidemiologists, and policy makers by making it possible to track the incidence of "true CM", directing malaria control programs where they will have the greatest economic impact on improving healthcare in Africa.
  • Zhao et al. teach a method for detection of vascular leakage in fluorescein angiogram images, which corresponds to MR whitening. The authors used graph methods and saliency maps to detect the leakage location.
  • Zhao et al. teach an automated method for detection of MR vascular abnormalities, such as intravascular filling defects, using fluorescein angiogram images.
  • An aspect of one embodiment of the present invention comprises an automated method for MR detection; a software for detection of MR lesions and an MR detection model that classifies each patient-case into MR or no-MR categories.
  • the method is preferably integrated with low-cost, easy to use and portable retinal imaging camera.
  • the users of a preferred embodiment of the present invention comprise medical technicians, nurses, and doctors working at a clinical facility providing diagnosis and treatment of patients with symptoms of CM.
  • the use of an embodiment of the present invention improves the positive predictive power for detection of CM and thus alerts caregivers to look for other conditions that cause coma when MR is not present, thus significantly reducing the current rate of CM misdiagnosis. BRIEF DESCRIPTION OF THE INVENTION.
  • One embodiment provides for a method to perform automatic malarial retinopathy detection wherein a retina is illuminated using an illumination source and capturing an image of the retina with a retinal camera.
  • the retinal image is transmitted to a processor where the processor performs an assessment in real time of the image quality wherein image quality is determined by one or more of the image quality analysis steps of: 1) determining alignment of the image; 2) determining presence and extent of crescents and shadows in the image; and 3) determining quantitative image quality of the image via a classification process trained using examples of visual perception by human experts. If the image quality does not meet a predetermined quality requirement, the camera is adjusted.
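As a concrete illustration of that quality gate, the sketch below wires the checks together in C++ with OpenCV. Both the library choice and every name in it are assumptions (the patent mentions Matlab and a C/C++ port, not OpenCV); the green-channel contrast score and the bright-border crescent proxy are crude stand-ins for the patent's expert-trained quality classifier, not its actual method.

```cpp
#include <opencv2/opencv.hpp>
#include <string>
#include <vector>

// Hypothetical quality gate: returns "adequate" or a retake hint.
std::string assessQuality(const cv::Mat& bgr) {
    std::vector<cv::Mat> ch;
    cv::split(bgr, ch);                            // B, G, R channels

    // Crude stand-in for the expert-trained quantitative quality score:
    // global contrast of the green channel, where retinal detail is strongest.
    cv::Scalar mean, stddev;
    cv::meanStdDev(ch[1], mean, stddev);
    double contrast = stddev[0] / 255.0;

    // Crude crescent proxy: bright pixels hugging the image border.
    cv::Mat border = ch[1].clone();
    cv::Rect inner(border.cols / 8, border.rows / 8,
                   border.cols * 3 / 4, border.rows * 3 / 4);
    border(inner).setTo(0);                        // keep only the border band
    double crescentFrac =
        cv::countNonZero(border > 220) / static_cast<double>(border.total());

    if (crescentFrac > 0.02) return "retake: crescent or shadow suspected";
    if (contrast < 0.10)     return "retake: low contrast or illumination";
    return "adequate";
}
```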
  • the adjusting step may employ a user interface to indicate to a user the quality of the image and suggested actions to take with respect to the camera.
  • the performing an image quality assessment step may classify the image according to a set of image quality labels.
  • An image quality descriptive label may be assigned to an image.
  • An image that meets the predetermined image quality requirement is further processed by the processor for detection in real time of Malarial retinopathy in the retinal image wherein Malarial retinopathy is determined by one or more of the detection of Malarial retinopathy steps of: 1) determining the presence of retinal whitening in the image; 2) determining the presence of hemorrhages in the image; 3) determining the presence of vessel discoloration in the image; and 4) determining quantitative likelihood of presence of Malarial retinopathy in the image via a classification process trained using examples of visual perception of malarial retinopathy by human experts.
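The sketch below illustrates, under the same C++/OpenCV assumption, how the four determinations might feed a single MR likelihood. The per-channel heuristics and the fixed fusion weights are placeholders of my own, not the patent's trained per-lesion detectors or its statistically optimal combination.

```cpp
#include <opencv2/opencv.hpp>
#include <vector>

// Fraction of pixels above a threshold: a crude per-lesion cue.
static double fracAbove(const cv::Mat& gray, int thresh) {
    return cv::countNonZero(gray > thresh) / static_cast<double>(gray.total());
}

// Hypothetical fusion of whitening, hemorrhage, and discoloration cues.
double mrLikelihood(const cv::Mat& bgr) {
    std::vector<cv::Mat> ch;
    cv::split(bgr, ch);                          // B, G, R

    double whitening = fracAbove(ch[1], 200);    // pale, bright retina regions
    cv::Mat invRed;
    cv::bitwise_not(ch[2], invRed);
    double hemorrhage = fracAbove(invRed, 200);  // dark reddish regions
    double discolor   = fracAbove(ch[0], 180);   // pale/orange vessel proxy

    // Convex combination; a real system would learn these weights from
    // expert-labeled examples rather than fix them by hand.
    return 0.4 * whitening + 0.3 * hemorrhage + 0.3 * discolor;
}
```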
  • the performing a Malarial retinopathy detection step may transform the image information into a plurality of color spaces, color, texture, statistical features or any combination thereof.
  • the performing step may group a plurality of features into a plurality of feature vectors. For example, the performing step determines presence and extent of retinal whitening in the image according to one or more clinical protocols. Further, the performing step may determine the presence and extent of hemorrhages in the image according to one or more clinical protocols.
  • the performing step uses a retinal camera with an optical contact lens having a wide field of view, for example greater than 120 degrees, but not limited thereto; in some embodiments a field of view of 50-100 degrees and/or 100-120 degrees can be used.
  • the executing step may employ a reduction of camera reflex in the image using color and textural features to distinguish between the reflex and true whitening and/or the executing step may additionally employ color, textural, morphological, and statistical feature extraction steps to form the feature vectors.
  • the executing step may additionally comprise assigning a descriptive label to the image as to presence and extent of retinal whitening according to one or more clinical protocols. For example, the executing step may employ a threshold to assign labels to the image. Further, the executing step may additionally comprise tuning one or more parameters to vary assignment of labels and/or performance of a classification model in terms of sensitivity and specificity.
  • the executing step additionally employs color, difference of Gaussians, contrast, and morphological feature extraction phase to form the feature vectors.
  • the executing step additionally employs color, difference of Gaussians, contrast, and morphological feature extraction phase to form the feature vectors and/or additionally comprises assigning a descriptive label to the image as to presence and extent of hemorrhages according to one or more clinical protocols.
  • the executing step may employ a threshold to assign labels to the image.
  • the executing step may additionally comprise tuning one or more parameters to vary assignment of labels and/or performance of a classification model in terms of sensitivity and specificity.
  • a descriptive label may be assigned to the image as to presence and extent of hemorrhages according to one or more clinical protocols.
  • the executing step employs a threshold to assign labels to the image.
  • the executing step additionally comprises tuning one or more parameters to vary assignment of labels and/or performance of a classification model in terms of sensitivity and specificity.
  • the executing step additionally employs color, intensity gradient, and statistical feature extraction phase to form the feature vectors. Further still, the executing step additionally employs a feature reduction phase to reduce the number of features.
  • a descriptive label is assigned to the image as to presence and extent of vessel discoloration according to one or more clinical protocols.
  • the executing step employs a threshold to assign labels to the image.
  • the executing step comprises tuning one or more parameters to vary assignment of labels and/or performance of a classification model in terms of sensitivity and specificity.
  • the step of determining quantitative likelihood of presence of Malarial retinopathy in the image via a classification process trained using examples of visual perception of malarial retinopathy by human experts is in real time. Thereafter, assigning a label to the image indicative of the presence or absence of Malarial retinopathy according to the likelihood value of the output.
  • the executing step comprises tuning one or more parameters to vary assignment of labels and/or performance of a classification model in terms of sensitivity and specificity.
  • determining quantitative likelihood of presence of Malarial retinopathy in the retinal image via a classification process trained using examples of visual perception of Malarial retinopathy by human experts further comprises a high specificity classification model and/or a high sensitivity classification model based upon quantitative result. Further still a label is assigned to the image indicative of the presence or absence of Malarial retinopathy according to the likelihood value of the output.
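One plausible reading of the high-specificity and high-sensitivity models is a single likelihood classifier with two operating points. The plain C++ sketch below is illustrative only: the helper pickThreshold is hypothetical, and it assumes the validation set contains examples of both classes. It scans thresholds and keeps the one that meets a specificity floor while maximizing sensitivity; a high-sensitivity model follows by swapping the roles of the two rates.

```cpp
#include <cstddef>
#include <vector>

// Scan thresholds over validation-set likelihood scores and return the one
// that meets a specificity floor while maximizing sensitivity.
double pickThreshold(const std::vector<double>& scores,
                     const std::vector<int>& labels,   // 1 = MR, 0 = no MR
                     double minSpecificity) {
    double best = 1.0, bestSens = -1.0;
    for (double t = 0.0; t <= 1.0; t += 0.01) {
        int tp = 0, fn = 0, tn = 0, fp = 0;
        for (std::size_t i = 0; i < scores.size(); ++i) {
            bool pos = scores[i] >= t;
            if (labels[i] == 1) { pos ? ++tp : ++fn; }
            else                { pos ? ++fp : ++tn; }
        }
        double sens = tp / static_cast<double>(tp + fn);
        double spec = tn / static_cast<double>(tn + fp);
        if (spec >= minSpecificity && sens > bestSens) {
            bestSens = sens;
            best = t;   // images scoring >= best get the "MR present" label
        }
    }
    return best;
}
```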
  • a system to perform automatic retinal screening, said system comprising an illumination source illuminating a retina; a retinal camera capturing a retinal image; a processor that receives the image and performs the following steps on the received image: performing an assessment in real time of the image quality, adjusting a setting of the camera if the image quality does not meet predetermined quality requirements, and performing an assessment in real time of the presence or absence of Malarial retinopathy.
  • Another embodiment provides a method to automatically determine presence or absence of malarial retinopathy in a retinal image, the method comprising the steps of illuminating a retina using an illumination source.
  • the retinal image is captured with a retinal camera.
  • the image is transmitted to a processor, and via the processor an assessment in real time of the image quality is performed, followed by assessing in real time the presence or absence of Malarial retinopathy with one or more of the following steps: detecting the presence or absence of retinal whitening; detecting the presence or absence of hemorrhages; detecting the presence or absence of vessel discoloration; and detecting the presence or absence of malarial retinopathy via a classification process trained using example images containing retinal lesions of malarial retinopathy.
  • One embodiment of the present invention comprises a computer method to conduct real time analysis of the likelihood of presence or absence of MR in the imaged retina. Further, a label indicative of that likelihood is assigned to the image or set of images.
  • An assessment of images in real time is conducted via a computer processor using three methods for the detection of retinal lesions associated with malarial retinopathy: retinal whitening, white-centered hemorrhages, and vessel discoloration.
  • the performing step determines if a retinal lesion such as retinal whitening is present, according to one or more clinical protocols.
  • a descriptive label is assigned to the image whether or not it shows retinal whitening, according to one or more clinical protocols.
  • the performing step determines presence and extent of hemorrhages and/or vessel discoloration in the image using image analysis and feature based classification of images. Further a descriptive label is assigned to the image as to presence and extent of hemorrhages and/or vessel discoloration.
  • the performing step may additionally employ a feature extraction step to group color, intensity, morphology, texture, and other distinguishing feature components into feature vectors.
  • the executing step may additionally employ a feature reduction step to reduce the number of features.
  • the executing step additionally comprises tuning one or more parameters to vary assignment of labels and/or performance of a classification model in terms of sensitivity and specificity. For example, the executing step employs a threshold to assign labels to the image or set of images.
  • the performing step comprises a high sensitivity classification model.
  • the performing step comprises a high specificity classification model.
  • the performing step determines a quantitative index describing the probability of presence of malarial retinopathy in the image via a classification process trained using examples of images labeled by human experts for presence or absence of malarial retinopathy. Further a descriptive label is assigned to the image for presence or absence of malarial retinopathy (binary classification).
  • the performing step classifies the image according to a set of labels describing presence or absence of malarial retinopathy.
  • a system to perform automatic retinal screening for malarial retinopathy having an illumination source illuminating a retina; a retinal camera capturing a retinal image; a processor receiving the image, performing an assessment in real time of the quality of image, performing an assessment in real time of the presence or absence of malarial retinopathy, and taking further investigative or treatment actions according to the results presented by the system.
  • a method to automatically determine presence or absence of malarial retinopathy in a retinal image is provided.
  • the method comprising the steps of illuminating a retina using an illumination source; capturing the retinal image with a retinal camera; transmitting the image to a processor; performing via the processor an assessment in real time of the quality of image, performing via the processor an assessment in real time of the presence or absence of malarial retinopathy, comprising determining presence of retinal whitening lesion according to one or more clinical protocols; determining presence and extent of hemorrhages and/or vessel discoloration in the image; and determining quantitative index describing a probability of presence of malarial retinopathy in the image via a classification process trained using examples of images labeled by human experts for presence or absence of malarial retinopathy; and determining investigative diagnosis or treatment strategies according to the results presented.
  • the adjusting step may employ a user interface to indicate to a user the presence or absence of malarial retinopathy in the image and suggested actions to take with respect to the investigative diagnosis or determination of a treatment strategy.
  • One embodiment of the present invention comprises a retinal camera that can be used for imaging comatose children.
  • the retinal camera may be modified to suit the clinical requirements, with components comprising an optical contact lens with wide field of view (>120 degrees) and a processor that forms an interface between the algorithm system and the retinal camera.
  • One or more embodiments of the present invention overcome barriers to adoption of retinal imaging at the primary care and hospital settings by the combination of a low-cost, portable, easy to use retinal camera and software-based methods to detect the presence of malarial retinopathy.
  • determination of input retinal image quality is preferably provided in real time to the person acquiring the images, i.e. the user, before image acquisition, i.e. during alignment of the camera to the patient’s eye.
  • the detection of malarial retinopathy using software-based methods preferably comprises descriptive labels that indicate the types of retinal lesions associated with MR that guide the user on determining the presence and/or severity of MR and risk of CM.
  • An embodiment of the present invention relates to a system and method for acquiring retinal images for automatic screening by machine-coded algorithms running on a computational unit.
  • The embodiment comprises a method that allows a user to use a retinal camera to record retinal images with adequate image quality according to certain rules that assess the quality of the images for further machine-based or human-based processing and/or clinical assessment.
  • a retinal camera of this embodiment preferably comprises a contact lens with wide FOV (>120-degrees) that can allow for imaging of central and peripheral retina using no more than three shots.
  • MR detection methods of the present invention comprise detection of retinal lesions, for example, retinal whitening, hemorrhages, and vessel discoloration.
  • Another embodiment of the present invention describes a machine-based method for integration of detection of three lesions to assess the likelihood of presence of MR.
  • a set of thresholds on the likelihood values can be determined so that images can preferably be assigned MR presence-associated labels such as "MR present" or "MR absent", and others as explained herein.
  • MR lesion detection methods of the present invention comprise machine-based transformations that can be carried out onboard a computing device integrated directly into a retinal camera.
  • the present invention also relates to methods to assign quantitative probability values or labels to sets of one or more retinal images related to the likelihood of the presence of MR as determined by machine code.
  • FIG. 1 illustrates a flow of images, data, and analysis according to one embodiment of the present invention for image quality and malarial retinopathy detection.
  • FIG. 2 depicts a more detailed view of the processes in FIG. 1.
  • FIG. 3(A) depicts an embodiment of the system of the present invention comprising a retinal camera with embedded computing unit module, chin rest, camera base, and external fixation light.
  • FIG. 3(B) depicts an alternate embodiment of the system of the present invention comprising a retinal camera with embedded computing unit module, chin rest, and wall-mounted articulated camera mount.
  • FIG. 4(A-D) illustrates a set of sample images adapted from digital retinal images of adequate quality of a subject with advanced diabetic retinopathy.
  • FIG. 4(A) is an illustration of an image taken with the retinal camera of FIG. 3.
  • FIG. 4(B) is an illustration of an image taken with a Canon CR 1 Mark II desktop retinal camera, which also shows retinal vessels (401) and an optic disc (402).
  • FIGS. 4(C) and 4(D) show the details of micro-aneurysms (403), taken with the retinal camera of FIG. 3 and Canon CR 1 Mark II desktop retinal camera, respectively. The images from both cameras show the same level of detail and are adequate for further processing or evaluation.
  • FIG. 5 illustrates a design of a contact lens with wide field of view into the retinal camera of FIG. 3.
  • FIG. 5(A) shows the optical lens system
  • FIG. 5(B) shows the contact lens design inside the retinal camera of FIG. 3, along with the light ray tracing (dashed lines).
  • FIG. 6 illustrates a set of sample images of an artificial eye.
  • FIG. 6(A) is an image taken with the retinal camera of FIG. 3 without a wide field of view (FOV) lens, which shows retinal structures such as retinal vessels (601), optic disc (602), and retinal lesions (603).
  • FIG. 6(B) is an image taken with the retinal camera of FIG. 3 with the wide field of view (FOV) lens.
  • FIG. 7 illustrates a set of sample retinal images adapted from a digital retinal image of a subject with malarial retinopathy taken with a retinal camera.
  • FIG. 7(A) is an image showing an arrow pointing towards the retinal whitening (701).
  • FIG. 7(B) is an image showing the arrows pointing towards the retinal hemorrhages (702).
  • FIG. 7(C) is an image showing the arrows pointing towards the vessel discoloration (703).
  • the images show an adequate level of detail required for further processing or evaluation.
  • FIG. 8 depicts an embodiment of the steps for the whitening detection phase 410.
  • FIG. 9 depicts an example image showing annotation of retinal whitening (dense striped pattern) and annotation of camera reflex (sparse striped pattern).
  • FIG. 10(A) depicts an example image showing retinal whitening as a dense striped pattern (1001).
  • FIG. 10(B) depicts a ground truth annotation (1002) for whitening in image in FIG. 10(A).
  • FIG. 10(C) depicts a likelihood map generated by the algorithm (1003) for detection of whitening in FIG. 10(A), using whitening detection step 410.
  • FIG. 11 depicts the steps of hemorrhage detection step 420 according to one embodiment of the present invention.
  • FIG. 12(A) depicts an example image showing hemorrhages (1201).
  • FIG. 12(B) depicts a ground truth annotation (1202) for hemorrhages in image in FIG. 12(A).
  • FIG. 12(C) depicts a likelihood map generated by the algorithm (1203) for detection of hemorrhages in FIG. 12(A), using hemorrhage detection step 420.
  • FIG. 13 depicts the steps for vessel discoloration detection phase 430.
  • FIG. 14(A) depicts an example image showing vessel discoloration (1401) and other retinal structures such as optic disc (1405) and normal retinal vessels (1406).
  • FIG. 14(B) depicts a ground truth annotation as dashed lines (1402) for vessel discoloration in the image in FIG. 14(A).
  • FIG. 14(C) depicts the discolored vessels detected (dotted lines: 1404) or missed (dashed-dotted lines: 1403) by the algorithm for detection of vessel discoloration in FIG. 14(A), using vessel discoloration detection phase 430.
  • FIG. 15 depicts the steps of Malarial Retinopathy detection phase 450 according to one embodiment of the present invention.
  • FIG. 16(A) depicts an example image from the dataset showing an optic disc (1601) and the MR lesions.
  • FIG. 16(B) depicts a ground truth annotation for MR lesions in image in FIG. 16(A).
  • FIG. 17 depicts elements in a computing unit 600 according to one embodiment of the present invention.
  • FIG. 18 depicts elements in an Image quality analyzer 300 according to one embodiment of the present invention.
  • FIG. 19 depicts computing unit processing 600 using a system on a chip (SOC) 650 according to one embodiment of the present invention.
  • FIG. 20 depicts computing unit processing 600 using a single board computer (SBC) 640 according to one embodiment of the present invention.
  • FIG. 21 depicts computing unit processing 600 using a personal computer (PC) 660 according to one embodiment of the present invention.
  • FIG. 22 depicts computing unit processing 600 using cloud computing 670 according to one embodiment of the present invention.
  • FIG. 23 depicts computing unit processing 600 using a central server 680 according to one embodiment of the present invention.
  • DETAILED DESCRIPTION OF EMBODIMENTS OF THE INVENTION An embodiment of the present invention provides retinal screening to human patients at risk of malarial retinopathy, but the machine-based methods can be modified to screen for retinal diseases comprising diabetic retinopathy, macular degeneration, cataracts, and glaucoma.
  • a user uses an embodiment of the present invention to record images of the fundus of adequate image quality to allow further methods to assign labels to the images in relation to the likelihood of the presence of malarial retinopathy.
  • further methods assign labels such as "MR present", "MR absent", or "Indeterminate" to sets of acquired fundus images.
  • Those fundus images labeled "MR present" or "Indeterminate" can be forwarded to a reading center for visual inspection by human experts according to telemedicine methods or to a cerebral malaria expert for further diagnostic investigation or treatment.
  • Other applications of the present invention will become obvious to those skilled in the art.
  • Embodiments of the present invention enable users to obtain and record retinal images of adequate image quality to allow further methods to execute machine-based and/or human-based clinical assessment.
  • Fundus cameras have the shortcoming of not comprising real time feedback on image quality, and therefore they are of limited clinical utility.
  • Traditional fundus cameras may comprise visual aids of focus and working distance but these aids have the shortcoming of not providing information to the user on other image quality features comprising field of view, illumination and alignment artifacts, media opacities, debris in the optical path, and others that will become obvious to others skilled in the art.
  • the present invention fills an unmet need for a system and methods that ensures adequate quality images at the point of care, thus eliminating the need for re-scheduling and re-imaging of a patient because of inadequate quality images.
  • None of the existing fundus cameras provides the automatic MR screening that is needed in malaria-affected countries, where immediate care is required but accurate diagnosis by a specialist is not easily available.
  • Embodiments of the present invention meet this need with integrated machine-based methods for automatic MR/no-MR determination.
  • a retinal camera and imaging protocol are described as non-limiting examples. It will become obvious to those skilled in the art that other retinal cameras and other imaging protocols can be employed without affecting the applicability and utility of the present invention.
  • Referring to FIG. 1, a flow chart showing phases of automatic processing of retinal images in conjunction with an imaging device in accordance with an embodiment of the present invention is illustrated.
  • Automatic processing is defined as assessing the quality of the retinal images, providing feedback to a user regarding said image quality, and assessing the likelihood of presence of malarial retinopathy.
  • the entire automatic processing step can be completed within 3 minutes when the software algorithms are optimized for speed and run on computationally efficient machines, whereas a human grader would take up to 20-30 minutes for assessment and detection of malarial retinopathy and could not process the retinal image in the time frame of the present invention.
  • the retinal camera is integrated with MR detection algorithms to process images captured at the point of care in near real-time.
  • the retinal imaging process begins with the imaging target 100, either an artificial eye or a human subject clinically diagnosed with CM, in partially or fully comatose state.
  • the imaging target can take many forms including a human eye or an artificial target simulating a human eye. Other imaging targets will be known to those skilled in the art.
  • the next step of an embodiment of the retinal imaging process is the image acquisition using a retinal camera 200.
  • One or more embodiments of the present invention include methods to optimize retinal image quality using selection of alignment, focus, and imaging wavelengths.
  • Image acquisition can take many forms, including capturing an image with a digital camera and capturing an image or a series of images (as in video) with a digital video camera. Retinal cameras typically employ two steps in acquiring retinal images.
  • in the first step, an alignment illumination source is used to align the retinal camera to the area of the retina that is to be imaged.
  • this alignment illumination source can comprise a visible source, while in non-mydriatic retinal cameras a near infrared illumination source that does not affect the natural dilation of the pupil is preferably used.
  • the second step is image acquisition, which comprises a visible illumination source driven as a flash or intense light. This flash of light is intense enough to allow the retinal camera to capture an image and short enough to avoid imaging the movements of the eye.
  • Another embodiment of the present invention is a retinal camera that uses near-infrared light for the alignment illumination source and generates a live, i.e. real-time, video feed of the retina.
  • the retinal camera preferably can capture at least one longitudinal point in time. This phase may utilize techniques and devices disclosed in 1) "Portable retinal imaging for eye disease screening using a consumer-grade digital camera" reported by Barriga et al. in the Proceedings of Photonics West BIOS, San Francisco, CA, 2012; and 2) "Low-cost, high-resolution scanning laser ophthalmoscope for the clinical environment" reported by Soliz et al. in the Proceedings of Photonics West BIOS, Proceedings Vol.
  • the retinal imaging processing further utilizes the Image Quality Analyzer 300 according to one embodiment.
  • the video signal of the retina is preferably automatically analyzed according to one or more imaging protocols and image quality criteria to assess whether an image is in agreement with an imaging protocol, and the extent of such agreement.
  • the video signal of the retina is further analyzed for detectable image quality defects, and the extent of such defects.
  • the video signal and one or more visual aids are preferably shown to the photographer to guide the photographer on changes to the camera alignment that may result in improved image quality.
  • a retinal image is acquired, such image is preferably automatically analyzed according to imaging protocol and/or image quality criteria to detect whether such image is in agreement with the imaging protocol, includes detectable image quality defects, and the extent of such agreement and defects.
  • the acquired retinal image is preferably assigned to a class indicative of its clinical utility, the extent and nature of detected image quality defects, and whether such image should be retaken. If after several tries the photographer is unable to obtain a good quality image, an output of “inadequate” is returned and a recommendation for indirect ophthalmoscopy is given.
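The capture-assess-retry behavior described above might look like the loop below. This is a sketch under stated assumptions: captureImage and assessQuality are hypothetical stubs for the camera interface and the Image Quality Analyzer 300, and the retry count of three is my assumption, since the text says only "several tries".

```cpp
#include <opencv2/opencv.hpp>
#include <iostream>
#include <string>

// Stub camera hook: a real system would grab a frame from the sensor.
static cv::Mat captureImage() {
    return cv::Mat(480, 640, CV_8UC3, cv::Scalar(8, 8, 8));
}

// Stub standing in for the Image Quality Analyzer 300.
static std::string assessQuality(const cv::Mat& img) {
    return cv::mean(img)[1] < 20 ? "retake: too dark" : "adequate";
}

int main() {
    const int kMaxTries = 3;   // assumed; the text says "several tries"
    for (int i = 1; i <= kMaxTries; ++i) {
        std::string verdict = assessQuality(captureImage());
        if (verdict == "adequate") {
            std::cout << "proceed to MR detection\n";
            return 0;
        }
        std::cout << "try " << i << ": " << verdict << ", retaking\n";
    }
    std::cout << "inadequate: recommend indirect ophthalmoscopy\n";
    return 1;
}
```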
  • the next step of the retinal imaging process utilizes the Malarial Retinopathy Detection 400 according to one embodiment.
  • one or more retinal images are automatically analyzed according to one or more malarial retinopathy risk assessment criteria to detect whether such image or set of images include signs of presence of malarial retinopathy.
  • the image or set of images are preferably assigned to a class indicative of the presence of malarial retinopathy.
  • the set of classes may include a class corresponding to an image that cannot be analyzed due to inadequate image quality.
  • a further step of the retinal imaging process involves Reading Center 500 according to one embodiment of the present invention.
  • a retinal image or a set of retinal images labeled as "Malarial Retinopathy present" as described below are visually analyzed by human experts according to one or more Malarial retinopathy classification criteria to detect whether such image or set of images include detectable MR lesions and/or features and the extent of such lesions and/or features, associated with MR. On the basis of this analysis the image or set of images are assigned to a class indicative of the presence of MR, the extent of disease, and guidance for follow up care. [0065]
  • the steps comprising Image Quality Analyzer 300 and MR detection phase 400 are preferably implemented by machine-coded mathematical algorithms that run on Computing Unit 600.
  • the Computing Unit 600 preferably comprises one or more types of random-access and read-only memory, one or more types of processors, and one or more types of human interface units including but not limited to keyboard, touch screen, display, and light indicators.
  • the Computing Unit 600 carries out one or more real time image transformations, analyses, and classifications according to mathematical algorithms in representations outside the space and time representations understood by the human brain and at processing speeds faster than what human brains can achieve.
  • FIG. 2 is a detailed view of the retinal imaging process.
  • the Image Target phase 100 comprises the object to be imaged. There are many objects that can be imaged.
  • the preferred imaging target is a human eye 110 with a pupil size of at least 3.7mm and clear media, e.g. no cataracts.
  • An alternative imaging target is an artificial eye 120 comprising one or more optical apertures simulating a human pupil, an internal spherical cavity simulating the interior of the human eye, and an internal surface that can simulate the physiological features of the human retina.
  • the internal cavity of an artificial eye can also be made of a material with reflectivity characteristics that can be used to characterize and/or calibrate a retinal camera.
  • Other alternative imaging targets will be known to those skilled in the art.
  • Retinal Camera 200 is used to obtain and record digital images of the imaging target 100 according to one embodiment of the present invention.
  • An embodiment of retinal camera 200 is an imaging sensor that can capture the light reflected from imaging target 100.
  • said imaging sensor can be a commercial camera such as a digital single lens reflex (DSLR) camera 210.
  • the optical element of Retinal Camera 200 that enables imaging the fundus of the eye is main lens 220.
  • the main lens is a +28 Diopter indirect ophthalmoscopy lens which results in an effective field of view of approximately 30 degrees on the retina.
  • a +40 Diopter main lens results in an effective field of view of about 45 degrees on the retina.
  • a contact lens with wide field of view results in an effective field of view of greater than 120 degrees on the retina.
  • the system optionally comprises a mechanism to switch or append between two or more main lenses in order to change the effective field of view of the retinal camera.
  • a mechanism to switch or append between two or more main lenses in order to change the effective field of view of the retinal camera.
  • Other methods to use and combine other types of main lenses will be known to those skilled in the art.
  • multiple illumination sources including a near infrared source 230 and one or more (preferably at least two) sources of visible light 240 are preferably combined by a dichroic beam splitter, i.e. a hot-mirror, in order to simplify the design and construction of the retinal camera.
  • the illumination sources preferably comprise LEDs.
  • the near infrared source 230 is preferably used to align the camera to the area of the fundus to be imaged without affecting the natural dilation of the subject’s pupil.
  • the one or more visible light sources 240 are commanded to emit a flash of light that illuminates the retina momentarily and allows the retinal camera to capture an image of the retina.
  • the wavelength and illumination intensity of the visible sources 240 are preferably selected to optimize the contrast offered by the characteristic reflectance and absorption of the natural chromophores in the retina. In this way illumination entering the pupil is minimized while captured reflected light is maximized adding to the comfort of the subject and reducing the effect of the light on the subject’s natural pupil dilation.
  • the near infrared illumination source 230 may be mounted on a passive heat sink.
  • the visible light illumination sources 240 may be mounted on an aluminum plate and operated in pulsed mode (20ms-1s).
  • the retinal camera 200 comprises a near infrared illumination source 230 with a preferable central wavelength of 850nm (CW). At this wavelength, the light penetrates the different layers of the retina up to the choroid, and alignment of the camera is performed using this layer.
  • the retinal camera 200 comprises a near infrared illumination source 230 with a preferable central wavelength of 760nm (CW), which enables the camera to be aligned against inner layers of the retina such as the retinal vasculature.
  • the retinal camera 200 comprises a near infrared illumination source 230 comprising selectable central wavelengths, e.g. between 760nm and 850nm, in order to align the camera against retinal features on different layers of the retina.
  • Other methods to illuminate the retina for alignment will be known to those skilled in the art.
  • the retinal camera 200 comprises a white light source such as an LED used as the visible light illumination source 240.
  • the color temperature of the white light LED determines the color balance in the resulting retinal image.
  • the retinal camera 200 comprises a method to select among two or more color temperatures of the white LED illumination source 240 in order to obtain different degrees of color balance among retinal features. For example, retinal vasculature is more prominent in the green channel whereas differences in retinal pigmentation due to melanin may call for different color temperatures in order to compensate for these differences.
  • the retinal camera 200 comprises a multi-color illumination source 240, such as a tri-color LED illumination source and a method to change the intensity of each of the multiple colors in order to optimize color balance in acquired retinal images. Other methods to use and combine other types of visible light illumination sources will be known to those skilled in the art.
  • the retinal camera 200 comprises a beamsplitter 250 to allow light to be directed to the retina and then captured by the image detector 210.
  • the retinal camera 200 comprises a nano-wire polarizer beamsplitter 250.
  • FIG. 3(A) illustrates an example of retinal camera 200 assembly that can be used to provide spatial stabilization between the imaging target 100 and the retinal camera 200. This example comprises chin rest 280 and camera mount 290.
  • the retinal camera 200 comprises an external fixation light 282 attached to the chin rest to allow precise control of the subject’s gaze in order to image different areas of the fundus.
  • FIG. 3(B) illustrates an example of retinal camera 200 that comprises a wall-mounted camera base 292 to enable relative stabilization between the imaging target 100 and the retinal camera 200.
  • FIGS. 4(A) and 4(B) are example images of adequate quality acquired with two different retinal cameras from two diabetic individuals presenting different stages of diabetic retinopathy.
  • the retinal camera 200 comprises a lens with 40-degree field of view (FOV).
  • FIG. 5(A) shows the optical design of the contact lens in conjunction with other lenses
  • FIG. 5(B) shows the layout with the position of the contact lens inside the retinal camera 200.
  • the retinal camera 200 comprises an optical contact lens 260 with wide FOV such as Mainster-PRP 165 optical lens (Ocular Instruments) producing an FOV of 165-degrees.
  • the resulting optical ray tracing of the design is shown in FIGS. 5(A) and 5(B).
  • the wide FOV optics preferably consists of a first element in contact with the cornea made of poly methyl methacrylate bonded to a high refractive index glass contact lens.
  • the high-power asphere, acting as the foreoptics, produces a wide-field aerial image of the retina that is captured by the camera imaging system.
  • Increasing the FOV requires using a contact lens, but does not increase the size of the camera.
  • the optical design preferably presents a pixel footprint resolution of approximately 5 microns, which may be sufficient to detect the smallest retinal feature needed for MR lesion detection, which is the width of the smallest retinal vessel wall, a feature that measures approximately 15 microns.
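As a rough arithmetic check of that sampling claim (an illustration, not text from the patent):

```latex
N = \frac{w_\text{vessel wall}}{\Delta_\text{pixel}}
  = \frac{15\,\mu\text{m}}{5\,\mu\text{m}} = 3 \ \text{pixels} \ \geq\ 2
```

so the smallest feature of interest spans about three pixels, above the two-sample minimum usually required to resolve a feature.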
  • the retinal camera 200 comprises a contact lens 260 with wide FOV such as Mainster-PRP 165 integrated into the retinal camera.
  • FIGS. 6(A) and 6(B) show example pilot images of an artificial eye captured without (FOV ~32°) and with (FOV 165°) the wide FOV lens 260, respectively.
  • a yellow dashed circle in the image in FIG. 6(B) indicates the 32° FOV captured by the camera without the wide FOV lens 260.
  • FIG. 6(A) shows retinal structures such as retinal vessels (601), optic disc (602), and retinal lesion (603).
  • the retinal camera 200 preferably further comprises an electronic transmission system to transmit images or video feed 270 of the areas of the retina illuminated by the near infrared illumination source 230 and the visible illumination sources 240 to the next phase of the Retinal Imaging Process.
  • Example systems can include wired transmission according to the HDMI standard, wireless transmission according to the Wi-Fi standard, and solid state memory media such as Secure Digital cards. Other transmission methods will be known to those skilled in the art.
  • Image quality analyzer 300 and Malarial Retinopathy Detection system 400 preferably perform real time and automatic assessment of retinal images transmitted by the transmission system 270.
  • the Image quality analysis 300 and MR detection process 400 preferably use one or more methods implemented as machine-coded algorithms and running on a computing unit 600 comprising one or more types of host memory 610, one or more types of host processors 620, and one or more types of graphics processing units 630, as shown in FIG. 17.
  • Examples of the methods used in the Image quality analysis 300 and MR detection process 400 include those found in scientific calculation software tools such as Matlab and its associated toolboxes. Other scientific calculation software tools will be known to those skilled in the art.
  • the Image quality analysis 300 and MR detection process 400 preferably use one or more methods implemented as machine-coded algorithms and running on a computing unit 600 comprising an embedded computer such as a system on a chip (SOC) 650 such as the one shown in FIG. 19, that can be integrated with the retinal camera 200.
  • the embedded computer comprises hardware such as the ODROID-U3 processing board, a Linux embedded computer with a 1.7 GHz Cortex-A9 quad-core processor, 2 GB of RAM, common I/O ports (HDMI, USB, Ethernet, audio), and a 2.2 inch, 240 x 320 TFT-LCD display.
  • This portable processing board can preferably show the images being processed and the results of the processing.
  • the machine-coded algorithms for MR detection system 400 are preferably ported to C/C++ and then implemented on the processing board.
  • the processing board is of sufficiently small size (3”-by-2.5”) to be integrated with the retinal camera without the need for extensive modifications or increase in system size.
  • the processor is equipped with Ethernet, GPIO, and USB ports that can be utilized for communication with the camera for automatic download of images as well as with a central computer for storing results.
  • Image Quality Analyzer 300 preferably performs real time and automatic assessment of retinal images transmitted by the transmission system 270.
  • the Image Preparation process 310 preferably comprises one or more image processing steps such as decoding of video into images, image resizing, image enhancement, and color channel transformation.
  • the Image Preparation process 310 preferably uses one or more methods implemented as machine-coded algorithms and running on a computing unit 600 comprising one or more types of host memory 610, one or more types of host processors 620, and one or more types of graphics processing units 630. Examples of the methods used in the Image Preparation process 310 include those found in scientific calculation software tools such as Matlab and its associated toolboxes. [0080] Referring to FIG. 18, the Analyze Image Quality phase 340 comprises one or more tools or applications to assess one or more image quality characteristics in retinal images. Said tools or applications are preferably implemented as machine-coded algorithms running in a computing unit 600.
  • the Analyze Image Quality phase 340 determines retinal image alignment, the presence and extent of crescents and shadows, and a measure of quantitative image quality according to expert visual perception.
  • the results of the Alignment, Crescent and Shadow, and Quantitative Image Quality phases are preferably passed to the Classification phase 350 for further processing.
  • the Image Quality Analyzer phase 300 may utilize techniques and devices disclosed in U.S. Application No. 14/259,014.
  • the Alignment phase preferably automatically determines whether a retinal image complies with an imaging protocol. If a retinal image does not comply with said imaging protocol the Alignment phase determines that said image is misaligned.
  • An embodiment of the imaging protocol uses one retinal image with the optic disc in the center of the image (herein referred to as F1) and one retinal image with the fovea in the center of the image (herein referred to as F2).
  • the Alignment phase preferably uses one or more image processing steps implemented as machine-code algorithms running in computing unit 600, and preferably comprises the steps of field detection, macula detection, right/left eye detection, and alignment check.
  • the Field Detection phase preferably uses one of more machine-coded algorithms running in a computing unit 600 to determine whether a retinal image is F1 or F2.
  • the Field Detection phase determines the location of the optic disc and relates this location to areas on the retinal image where the optic disc should be located for F1 images and F2 images.
  • a region of interest with respect to the field of view (FOV) center is defined.
  • the region consists of a vertical band with width equal to two optic disc (OD) diameters. If the automatically detected optic disc location falls inside this band, then the image is classified as F1. If the OD falls outside this band, it may not be an F1 image since it is farther from the FOV center, relative to the macula.
  • if the OD is outside of the band on either side, there is a higher probability of the macula being inside the band on the other side (given that the macula is approximately two optic disc diameters away from the OD, and the width of the band is two OD diameters). Thus the macula is closer to the FOV center relative to the OD, and therefore it is an F2 image.
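In code, the band test reduces to a single comparison. A sketch under the same C++/OpenCV assumption; the function name classifyField is hypothetical, and the optic disc position and diameter are assumed to come from the preceding OD-detection step.

```cpp
#include <opencv2/opencv.hpp>
#include <cstdlib>
#include <string>

// Classify an image as disc-centered (F1) or macula-centered (F2) from the
// detected optic disc position, per the vertical-band rule described above.
std::string classifyField(cv::Point odCenter, cv::Size imageSize,
                          int odDiameterPx) {
    int fovCenterX = imageSize.width / 2;
    int halfBand   = odDiameterPx;   // band width = 2 OD diameters
    bool odInBand  = std::abs(odCenter.x - fovCenterX) <= halfBand;
    // OD near the FOV center -> F1; otherwise the macula is likely nearer
    // the center, so the image is treated as F2.
    return odInBand ? "F1" : "F2";
}
```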
  • Macula Detection step uses one or more machine-coded algorithms running in a computing unit 600 to determine the location of the macula within a retinal image.
  • An embodiment of the Macula Detection step preferably comprises a histogram equalization step to enhance image contrast, a pixel binarization step to eliminate gray scales, a pixel density map step to determine the region with the highest pixel density within the image, a location constraints step based on angular position with respect to the optic disc location, and a probability map step to determine the most likely candidate pixels representing the macula.
  • the most likely pixel location is then preferably assigned as the machine-calculated macula location.
  • a probability map results from the steps carried out by the Macula Detection step on a retinal image.
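A compact sketch of that chain (contrast enhancement, binarization, density map, location constraint) under the same C++/OpenCV assumption. The threshold value and the ring-shaped search region standing in for the angular location constraint are my simplifications, not the patent's probability map.

```cpp
#include <opencv2/opencv.hpp>
#include <vector>

// Locate the macula as the densest dark region near the optic disc.
cv::Point detectMacula(const cv::Mat& bgr, cv::Point odCenter,
                       int odDiameterPx) {
    std::vector<cv::Mat> ch;
    cv::split(bgr, ch);
    cv::Mat eq, dark;
    cv::equalizeHist(ch[1], eq);                 // enhance green-channel contrast
    cv::threshold(eq, dark, 60, 255, cv::THRESH_BINARY_INV);  // keep dark pixels

    cv::Mat density;                             // local dark-pixel density map
    cv::boxFilter(dark, density, CV_32F,
                  cv::Size(odDiameterPx, odDiameterPx));

    // Simplified location constraint: search an annulus around the OD,
    // excluding the disc itself (the macula sits ~2 OD diameters away).
    cv::Mat mask = cv::Mat::zeros(density.size(), CV_8U);
    cv::circle(mask, odCenter, 3 * odDiameterPx, cv::Scalar(255), -1);
    cv::circle(mask, odCenter, odDiameterPx, cv::Scalar(0), -1);

    double maxVal;
    cv::Point maxLoc;
    cv::minMaxLoc(density, nullptr, &maxVal, nullptr, &maxLoc, mask);
    return maxLoc;                               // machine-calculated macula
}
```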
  • the Left/Right Eye Detection step preferably uses one or more machine-coded algorithms running in a computing unit 600 to determine the side of the face a retinal image belongs to.
  • a Left/Right Eye Detection step preferably analyzes the vessels within the optic disc, and step compares the vessel density in the left half of the optic disc to the vessel density in the right half of the optic disc. The half with the lower vessel density is labeled as the temporal edge of the optic disc. If the temporal edge of the optic disc is on the right side of the retinal image then the retinal image is labeled as“left”. If the temporal edge of the optic disc is on the left side of the retinal image then the retinal image is labeled as“right”.
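A minimal sketch of this vessel-density comparison, assuming a binary vessel mask and an optic-disc bounding box are available from earlier steps (all names hypothetical):

```python
import numpy as np

def eye_side(vessel_mask: np.ndarray, od_box: tuple) -> str:
    """Label an image 'left' or 'right' by comparing vessel density in the
    two halves of the optic-disc region; the sparser half is taken to be
    the temporal edge of the disc."""
    x0, y0, x1, y1 = od_box                       # optic-disc bounding box
    disc = vessel_mask[y0:y1, x0:x1].astype(float)
    mid = disc.shape[1] // 2
    left_density = disc[:, :mid].mean()           # vessel fraction, left half
    right_density = disc[:, mid:].mean()          # vessel fraction, right half
    temporal_on_right = right_density < left_density
    return "left" if temporal_on_right else "right"
```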
  • Alignment Check step preferably uses one or more machine-coded algorithms running in a computing unit 600 to determine whether a retinal image is correctly aligned.
  • An embodiment of Alignment Check comprises steps to check that the macula position is at an angle between 0 degrees and 20 degrees below the position of the optic disc with respect to the horizontal axis of the retinal image, to check that the distance between the optic disc center and the macula is more than two optic disc diameters and less than three optic disc diameters, to check that the optic disc and macula are positioned within the one-third to two-thirds section of the vertical image size, and to check that the macula is positioned within the vertical region limited by the temporal vessel arcades (a sketch of these checks follows the labeling rules below).
  • If a retinal image passes the aforesaid checks, it is assigned two labels, one for eye side and one for field of view.
  • the eye side label is preferably“OD” if it is a right eye image or“OS” if it is a left eye image.
  • the field of view label is preferably“F1” if it is a disc-centered image or“F2” if it is macula-centered image.
  • An image that fails one or more checks is preferably assigned the label“Misaligned” or“unacceptable”.
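The geometric checks above can be sketched as follows (a hedged illustration under assumed image coordinates, with y increasing downward; the arcade bounds are assumed to come from a separate arcade detector):

```python
import math

def alignment_check(od: tuple, macula: tuple, od_d: float, img_h: int,
                    arcade_top: float, arcade_bottom: float) -> bool:
    """Return True when all geometric alignment checks from the text pass.
    od, macula: (x, y) centers; od_d: optic-disc diameter in pixels."""
    dx, dy = macula[0] - od[0], macula[1] - od[1]
    angle = math.degrees(math.atan2(dy, abs(dx)))   # positive: macula below disc
    dist = math.hypot(dx, dy)
    checks = [
        0.0 <= angle <= 20.0,                    # macula 0-20 degrees below disc
        2.0 * od_d < dist < 3.0 * od_d,          # disc-macula distance in (2, 3) OD
        img_h / 3 <= od[1] <= 2 * img_h / 3,     # disc in middle third vertically
        img_h / 3 <= macula[1] <= 2 * img_h / 3, # macula in middle third vertically
        arcade_top <= macula[1] <= arcade_bottom,  # macula within vessel arcades
    ]
    return all(checks)
```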
  • Crescent/Shadow Detection step preferably uses one or more machine-coded algorithms running in a computing unit 600 to determine whether a retinal image includes crescents and/or shadows.
  • Crescents occur from iris reflection due to small pupils or misalignments by the photographer.
  • the reflected light shows up as a bright region towards the edge of the image.
  • the brightness of the crescent is generally strongest at the boundary of the image and fades toward the center of the image. Shadows are caused by insufficient light reaching the retina, which creates dark regions on the images. Both crescents and shadows obscure parts of the retina that may contain important clinical information.
  • embodiments of the Crescent/Shadow Detection step comprise creating a mask that contains the retinal image, applying a gradient filtering algorithm to detect gradual changes in pixel value towards the center of the image, enhancing the gradient image, and determining whether the pixels in the third quartile of the gradient image occupy more than a preferable percentage (e.g.
  • other embodiments of the Crescent/Shadow Detection step comprise applying a pixel value threshold to the pixels of the retinal image, inverting the values of the thresholded image, counting the number of pixels within 10 percent of saturation, and comparing this count to 25 percent of the total number of pixels in the retinal image.
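A minimal sketch of this threshold-based embodiment (the 10% and 25% figures come from the text; the function and its interface are illustrative assumptions):

```python
import numpy as np

def crescent_shadow_flag(gray: np.ndarray) -> bool:
    """Invert the 8-bit image so dark shadows become bright, count pixels
    within 10% of saturation, and flag the image when that count exceeds
    25% of all pixels."""
    inverted = 255 - gray.astype(np.int32)        # shadows -> near saturation
    near_saturated = inverted >= int(0.9 * 255)   # within 10% of saturation
    return int(near_saturated.sum()) > 0.25 * gray.size
```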
  • Embodiments of the present invention preferably provide an automated approach based on the classification of global and local features that correlate with the human perception of retinal image quality as assessed by eye care specialists.
  • Quantitative Image Quality step preferably uses one or more machine-coded algorithms running in a computing unit 600 to determine whether a retinal image includes image quality defects associated with visual perception of human experts.
  • Visual perception information comprises a dataset of retinal images which have been assigned labels by expert readers of retinal images. Said labels are preferably “adequate” for retinal images of adequate quality and “inadequate” for retinal images that lack adequate quality. Adequacy comprises the expert’s subjective assessment of whether an image is of sufficient quality to make a clinical determination, including but not limited to the presence of lesions that may indicate the presence of disease. The dataset of images and associated labels is used as a training set for quantitative classification. [0088] Feature extraction comprises calculation of mathematical image features of one or more categories. An embodiment of feature extraction comprises four categories: vessel density features, histogram features, texture features, and local sharpness features.
  • the overall image content such as lightness homogeneity, brightness, and contrast are preferably measured by global histogram and textural features.
  • the sharpness of local structure such as optic disc and vasculature network, is preferably measured by a local perceptual sharpness metric and vessel density. It will be known to those skilled in the art that different categories of mathematical image features and combinations thereof are possible and are all part of the present invention.
  • Vessel density features are used in order to check the sharpness of dark vessel structures since the performance of vessel segmentation is sensitive to the blurriness of vasculature.
  • a method based on the Hessian eigensystem and second-order local entropy thresholding can be used (H. Yu, S. Barriga, C. Agurto, G. Zamora, W. Bauman, and P. Soliz, "Fast Vessel Segmentation in Retinal Images Using Multiscale Enhancement and Second-order Local Entropy," SPIE Medical Imaging, San Diego, USA, Feb. 2012). Vessel segmentation is performed after illumination correction and adaptive histogram equalization to remove uneven lightness and enhance contrast in the green channel of the retinal images.
  • Vessel density is preferably calculated as the ratio of the area of segmented vessels over the area of field of view (FOV) in an image.
  • histogram features are extracted from two or more color spaces, e.g. RGB: mean, variance, skewness, kurtosis, and the first three cumulative density function (CDF) quartiles. These seven histogram features describe overall image information such as brightness, contrast, and lightness homogeneity. First-order entropy and spatial frequency may also be computed to capture image complexity.
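A sketch of the seven per-channel histogram features, assuming NumPy/SciPy and a boolean field-of-view mask (illustrative only; the feature order is an assumption):

```python
import numpy as np
from scipy.stats import skew, kurtosis

def histogram_features(channel: np.ndarray, fov_mask: np.ndarray) -> list:
    """The seven per-channel histogram features named in the text: mean,
    variance, skewness, kurtosis, and the first three CDF quartiles."""
    v = channel[fov_mask].ravel().astype(float)   # pixels inside the FOV only
    q1, q2, q3 = np.percentile(v, [25, 50, 75])   # first three CDF quartiles
    return [v.mean(), v.var(), skew(v), kurtosis(v), q1, q2, q3]
```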
  • Texture features are used in identifying objects in an image.
  • the texture information can be obtained by computing the co-occurrence matrix of an image.
  • five Haralick texture features are calculated: the second order entropy, contrast, correlation, energy and homogeneity (Haralick, R.M., K. Shanmugan, and I. Dinstein, "Textural Features for Image Classification", IEEE Transactions on Systems, Man, and Cybernetics, Vol. SMC-3, 1973, pp. 610-621).
  • Entropy measures the randomness of the elements of the matrix; entropy is highest when all elements of the matrix are maximally random. Contrast measures the intensity difference between a pixel and its neighbor.
  • the correlation feature measures the correlation between the elements of the matrix.
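A hedged sketch of these five co-occurrence features using scikit-image (the distance/angle choice is an assumption; note that older scikit-image releases spell the functions `greycomatrix`/`greycoprops`):

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def texture_features(gray_u8: np.ndarray) -> list:
    """Five Haralick-style features from the gray-level co-occurrence
    matrix: second-order entropy, contrast, correlation, energy, and
    homogeneity."""
    glcm = graycomatrix(gray_u8, distances=[1], angles=[0],
                        levels=256, symmetric=True, normed=True)
    p = glcm[:, :, 0, 0]
    entropy = -np.sum(p[p > 0] * np.log2(p[p > 0]))   # second-order entropy
    feats = [entropy]
    for prop in ("contrast", "correlation", "energy", "homogeneity"):
        feats.append(float(graycoprops(glcm, prop)[0, 0]))
    return feats
```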
  • a clear and sharp edge is important for good quality of an image.
  • a local sharpness metric, the cumulative probability of blur detection (CPBD), is preferably used to quantify edge sharpness.
  • Average edge width and gradient magnitude have been used to measure blurriness of images, but these have been found to be too simplistic to directly correspond to human visual perception of blurriness, which is a complicated process.
  • An embodiment of the local sharpness feature step uses the cumulative probability of blur detection (CPBD) at every edge (N. D. Narvekar and L. J. Karam, "A No-Reference Image Blur Metric Based on the Cumulative Probability of Blur Detection (CPBD)," IEEE Transactions on Image Processing, Vol. 20 (9), 2678-2683, Sep. 2011).
  • the CPBD metric builds on the concepts of Just Noticeable Blur (JNB) and Just Noticeable Difference (JND); the cumulative probability is based on the sum of the blur probabilities that are below 63%.
  • the CPBD value is therefore negatively correlated to the edge blurriness.
  • the image is divided into blocks of size 64x64, and the ratio of edge blocks (blocks in which the number of edge pixels exceeds a certain threshold) to the total number of blocks is used as another feature, since some retinal images that are hazy due to cataract may appear foggy and misty while retaining sharp edges underneath.
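The edge-block ratio can be sketched as below; only the 64x64 block size comes from the text, while the Sobel-based edge test and the edge-pixel threshold are illustrative assumptions:

```python
import numpy as np
from scipy import ndimage

def edge_block_ratio(gray: np.ndarray, block: int = 64, edge_px_thresh: int = 20) -> float:
    """Ratio of 64x64 blocks whose edge-pixel count exceeds a threshold,
    over all blocks (the haze/fog feature described above)."""
    g = gray.astype(float)
    mag = np.hypot(ndimage.sobel(g, axis=1), ndimage.sobel(g, axis=0))
    edges = mag > mag.mean() + 2.0 * mag.std()     # crude edge map (assumption)
    h, w = gray.shape
    blocks = edge_blocks = 0
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            blocks += 1
            if edges[y:y + block, x:x + block].sum() > edge_px_thresh:
                edge_blocks += 1
    return edge_blocks / max(blocks, 1)
```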
  • Quantitative Classification preferably comprises a dataset of reference labels of image quality and associated mathematical features to produce a mathematical predictor of image quality. Further, Quantitative Classification preferably comprises a mathematical predictor of image quality to predict the image quality label of an image using said image's mathematical features.
  • a predictor of image quality is constructed by determining the relationship between the reference set of image quality labels and their associated mathematical features. To predict the image quality label of a retinal image, the predictor of image quality uses said relationship to assign said image a label of either“adequate” or“inadequate” on the basis of the mathematical features of said image.
  • An embodiment of the mathematical predictor of image quality comprises a partial least squares (PLS) classifier.
  • PLS is a very powerful method of eliciting the relationship between one or more dependent variables and a set of independent (predictor) variables, especially when there is a high correlation between the input variables.
  • the same feature when applied to different color channels tends to be highly correlated even if their magnitudes are quite different.
  • PLS finds a lower-dimensional orthogonal sub-space of the multi-dimensional feature space and provides robust prediction. It will be known to those skilled in the art that other prediction models exist and can be used alone or in combination to obtain labels of image quality.
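A minimal sketch of such a PLS-based quality predictor using scikit-learn, with random stand-in data in place of the real feature/label set; the component count and the 0.5 cutoff are assumptions, not values from the patent:

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(0)
X_train = rng.random((200, 30))                    # rows: images, cols: features
y_train = (rng.random(200) > 0.5).astype(float)    # 1 = "adequate", 0 = "inadequate"

pls = PLSRegression(n_components=8)                # low-dimensional orthogonal subspace
pls.fit(X_train, y_train)

X_test = rng.random((5, 30))
scores = pls.predict(X_test).ravel()               # continuous quality scores
labels = np.where(scores >= 0.5, "adequate", "inadequate")
print(labels)
```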
  • the IQ Classification phase 350 is preferably implemented as machine-coded algorithms running in a computing unit 600 and preferably assigns a label to an RGB retinal image on the basis of the results from the Analyze Image Quality phase 340. The label is indicative of the image quality and suggestive of an action that the photographer can take, such as “Valid” or “Invalid”, “acceptable” or “unacceptable”, “adequate” or “inadequate”, or any other label obvious to anyone skilled in the art.
  • a label “Valid” is preferably assigned when the Crescent/Shadow step outputs an “adequate” label, the alignment check step outputs an “acceptable” label, and the Quantitative Classification outputs an “adequate” label.
  • the Malarial Retinopathy (MR) Detection step 400 preferably processes one or more retinal images to determine whether said images include signs of MR.
  • the Malarial Retinopathy Detection step 400 preferably comprises one or more machine-coded mathematical algorithms running in a computing unit 600 to assign a label to a retinal image indicative of the presence or absence of MR.
  • the Malarial Retinopathy Detection phase 400 comprises one or more tools or applications to assess the presence of one or more MR retinal lesions or retinal abnormalities in retinal images. Said tools or applications are preferably implemented as machine-coded algorithms running in a computing unit 600.
  • a set of Reference Images 405 determines the valid retinal images for further processing
  • the Retinal whitening detection step 410 assesses the likelihood of presence of retinal whitening and the whitening location in the retinal image
  • the Hemorrhage detection phase 420 assesses the likelihood of presence of hemorrhages and hemorrhage location in the retinal image
  • the Vessel discoloration detection step 430 detects likelihood of presence of vessel discoloration and vessel discoloration location in the retinal image.
  • the results of the retinal whitening detection 410, Hemorrhage detection 420, and Vessel discoloration detection 430 steps are preferably passed to the Lesion based classification phase 440 to assess the likelihood of presence of MR, which is preferably passed to the MR detection phase 450 to assess the presence or absence of MR based on quantitative likelihood value for presence of MR determined by a classifier.
  • the set of Reference Images 405 preferably comprises images containing a plurality of plausible grades of detectable MR, and also comprises cases of no detectable MR. As shown by the arrows in FIGS.
  • the set of Reference Images 405 preferably further comprises descriptive labels according to the grades of detectable MR and these may be referred to as ground truth.
  • the retinal whitening detection step 410 comprises one or more machine-coded algorithms running in a computing unit 600 to assess the presence of whitening and assess the location of whitening within a retinal image or set of images.
  • the whitening detection step 410 preferably comprises a pre-processing step 411 to minimize camera reflex, a marker-controlled watershed segmentation step 415 to form image splats, a feature extraction step 416 to determine and extract image features indicative of whitening, a feature classification step 417 using a partial least squares classifier, a false positive reduction step 418, and an image classification step 419 to assess the most likely class of the image, representing whether the image contains retinal whitening.
  • the most likely class assessed in step 419 is then preferably assigned to the image or set of images.
  • FIG.10.C illustrates an example of the probability map (1003) resulting from the steps carried out by the Whitening Detection phase 410 on a retinal image.
  • the pre-processing step 411 preferably automatically minimizes the effect of camera reflex in the retinal image, therefore, decreasing the number of false positives resulting from imaging artifacts such as reflex from the internal limiting membrane, as the color features of whitening largely overlap with that of the reflex.
  • a true whitening appears as creamy or fuzzy white in color (dense striped annotations in FIG. 9), whereas a reflex presents with shiny and bright white color (sparse striped annotations in FIG. 9).
  • step 412 comprises one or more image processing algorithms to extract color features from the color spaces such as green (RGB color space), k (CMYK color space), X and Z (CIE-XYZ color space), and‘L’ (HSL color space).
  • the color spaces preferably provide a significant distinction between a true whitening region and reflex beyond what a human observer can distinguish.
  • the color channels are combined using a pixel-based image multiplication operation to improve contrast between true whitening and reflex.
  • white top-hat transform is applied to the contrast-improved image, which highlights the reflex and separates it from true whitening (transform image).
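A hedged sketch of this channel-multiplication and top-hat preprocessing with OpenCV; the CMYK 'k' proxy (1 minus the maximum RGB value), the kernel size, and the normalization are assumptions:

```python
import cv2
import numpy as np

def reflex_highlight(bgr: np.ndarray) -> np.ndarray:
    """Multiply green (RGB), k (CMYK), X and Z (CIE-XYZ), and L (HSL)
    channels pixelwise to boost whitening/reflex contrast, then apply a
    white top-hat transform to isolate the small, shiny reflex."""
    rgb = cv2.cvtColor(bgr, cv2.COLOR_BGR2RGB).astype(np.float32) / 255.0
    g = rgb[:, :, 1]
    k = 1.0 - rgb.max(axis=2)                                  # K of CMYK (proxy)
    xyz = cv2.cvtColor(bgr, cv2.COLOR_BGR2XYZ).astype(np.float32) / 255.0
    hls = cv2.cvtColor(bgr, cv2.COLOR_BGR2HLS).astype(np.float32)
    lightness = hls[:, :, 1] / 255.0                           # L of HSL
    product = g * k * xyz[:, :, 0] * xyz[:, :, 2] * lightness  # pixelwise multiplication
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (25, 25))
    return cv2.morphologyEx(product, cv2.MORPH_TOPHAT, kernel)  # highlights reflex
```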
  • the pre-processing step 411 also comprises a textural-feature-based processing step 413, which preferably uses textural features to distinguish between true whitening and reflex, such as but not limited to the smooth, fuzzy texture of whitening versus the sharp texture of reflex with strong edges; these are the cues preferably used by a retinal reader in differentiating whitening from reflex.
  • This step extracts features using the textural filter functions such as but not limited to, entropy, range, and standard deviation, applied to the top-hat transformed image obtained at the output of step 412.
  • the texture-filtered image obtained at the output of step 413 is post-processed in step 414 for contrast enhancement using techniques such as contrast equalization, which highlights the reflex region.
  • the contrast-equalized image is subtracted from the original image using a pixel-based image subtraction operation, and subjected to illumination normalization, Gaussian smoothing, and noise removal.
  • the preferable outcome of step 414 minimizes the reflex and preserves true whitening.
  • the output image obtained in step 414 is used for further processing comprising one or more image processing algorithms for assessment of whitening.
  • the watershed segmentation step 415 preferably uses techniques such as but not limited to, marker controlled watershed segmentation, applied to the green channel of the output image in step 414.
  • the step 415 divides the image into a number of watershed regions called splats based on region homogeneity (image regions with similar intensity levels).
  • the step 415 preferably uses gradient magnitudes of the green channel at multiple scales and utilizes their maximum for segmentation. Furthermore, the watershed segmentation of the gradient magnitude image is performed, which generates the splats.
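A sketch of this multiscale-gradient watershed that produces splats, using SciPy/scikit-image (the scales, marker rule, and percentile are illustrative; older scikit-image keeps `watershed` in `skimage.morphology`):

```python
import numpy as np
from scipy import ndimage
from skimage.segmentation import watershed

def splat_segmentation(green: np.ndarray) -> np.ndarray:
    """Take the pixelwise maximum of gradient magnitude over several
    scales, then watershed the gradient image; each labeled basin is one
    homogeneous 'splat'."""
    g = green.astype(float)
    grad = np.zeros_like(g)
    for sigma in (1, 2, 4):                                # multiple scales
        sm = ndimage.gaussian_filter(g, sigma)
        gm = np.hypot(ndimage.sobel(sm, axis=1), ndimage.sobel(sm, axis=0))
        grad = np.maximum(grad, gm)                        # pixelwise maximum
    flat = grad < np.percentile(grad, 20)                  # homogeneous regions
    markers, _ = ndimage.label(flat)                       # one marker per region
    return watershed(grad, markers)                        # labeled splat image
```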
  • the feature extraction step 416 preferably extracts multiple color-, morphology-, and texture-based features from each splat of the processed image for color channels such as ‘k’ (CMYK color space), green (RGB color space), and ‘X’ and ‘Z’ (CIE-XYZ color space), to form a feature vector.
  • a feature vector is a set of features that describe color, textural, and morphological properties of image structures, represented in a quantitative format comprising continuous or discrete as well as signed (+/-) or unsigned values.
  • the features included in a feature vector follow the standard definitions of image properties that are universally accepted.
  • the feature vector used for the detection of whitening preferably comprises: 1) Haralick textural features such as contrast, correlation, energy, homogeneity, entropy, dissimilarity, and other textural features obtained from a gray-level co-occurrence matrix (22 features), 2) Statistical features: median, standard deviation, kurtosis, skewness, moments, and percentile at various values (10 features), 3) Morphological features: Eccentricity, Solidity, Bounding box.
  • the feature vector preferably comprises a multitude of features but other significant color and textural features can be included.
  • the classification step 417 preferably integrates the extracted feature vector for classifier training and testing.
  • each splat in the test images is preferably assigned a probability of being inside the whitening region (Figure 10(C): 1003).
  • FIG. 10(A) shows the corresponding retinal image with whitening depicted as a dense striped pattern (1001) and
  • FIG. 10(B) shows ground truth annotated for whitening (1002).
  • the probability map can preferably be thresholded at various values to assess whitening with different sensitivity/specificity characteristics.
  • the false positive reduction step 418 preferably uses morphological features, such as but not limited to size, eccentricity, solidity, and bounding box of the detected candidates for whitening, to minimize false positive detections of whitening.
  • the image classification step 419 preferably assesses the presence or absence of whitening in a given image using features extracted from the image, such as but not limited to: 1) the maximum probability assigned to a splat in the image, 2) the sum of non-zero probabilities assigned to splats, and 3) the product of non-zero probabilities assigned to splats.
  • a PLS classifier preferably uses one or more of the image-based features extracted in step 419 to assess, for each image, the likelihood that it contains whitening.
  • the Hemorrhage detection step 420 uses one or more machine-coded algorithms running in a computing unit 600 to assess the presence of hemorrhages and the location of the hemorrhages within a retinal image or set of images.
  • Retinal hemorrhages of MR are predominantly white-centered, intra-retinal, blot-like hemorrhages similar to Roth spots, which most commonly involve the inner retinal layers. They can occur in isolation, in small groups, or in large clusters.
  • FIG. 12.A shows examples of MR hemorrhages (1201) and FIG.
  • an embodiment of the hemorrhage detection step 420 preferably comprises a supervised classification step 421, an unsupervised detection step 423, a feature extraction step 424 to determine and extract image color features indicative of hemorrhages within the image, a pixel-based mathematical operation step 425, a watershed segmentation step 426, a hybrid method in step 427, a false positive reduction step 428, and an image classification step 429 to determine the most likely class of the image, representing whether the image contains hemorrhages.
  • a supervised classification step 421 comprises extraction of features including color, difference of Gaussians, and local contrast; and preferably the use of a k-Nearest Neighbor classifier to assess the presence of hemorrhages in a retinal image.
  • An unsupervised classification step 423 for detection of hemorrhages comprises extraction of color features in step 424 from various color spaces such as but not limited to,‘a’ (Lab color space),‘u’ (Luv color space),‘Q’ (YIQ color space),‘C’ (LCH color space).
  • a pixel-based mathematical operation step 425 preferably combines the aforementioned four color channels and such others using a pixel-based image multiplication operation, where each output pixel’s value depends on the multiplication of the corresponding input pixel values. The step 425 improves the contrast of hemorrhage lesions relative to the retinal background.
  • a watershed segmentation step 426 comprises processing the contrast-improved image in step 425, to assess the gradient magnitude and form the watershed region boundaries.
  • the output of step 426 comprises a segmentation image preferably assigned with a probability to each watershed region of being inside a hemorrhage.
  • the unsupervised detection step 423 preferably detects the small/subtle hemorrhages; whereas, the supervised classification step 421 preferably assesses the presence and segments large hemorrhages.
  • a hybrid method in step 427 preferably comprises combining the outputs of both the supervised classification step 421 and the unsupervised detection step 423, resulting in a hybrid image (FIG. 12.C).
  • the false positive reduction step 428 preferably uses morphological features such as eccentricity, solidity, and bounding box of the detected candidates for hemorrhages, to minimize false positive detections of hemorrhages.
  • the image classification step 429 preferably assesses, for each image, the likelihood that it contains hemorrhages, by counting the number of hemorrhages detected in the given image and preferably using the count to assess the likelihood value. Furthermore, a label is assigned to the given image or set of images based on the likelihood value, as “Hemorrhages present” or “Hemorrhages absent”. [00105] The Vessel discoloration detection step 430, detailed in FIG. 13, comprises one or more machine-coded algorithms running in a computing unit 600 to assess the presence of discolored vessels and the location of the vessel discoloration within a retinal image or set of images.
  • Retinal vessel changes are a feature uniquely associated with MR.
  • Vessel discoloration presents as discoloration from red to orange or white, primarily in the retinal periphery.
  • FIG. 14(A) shows an example of vessels presenting with discoloration due to MR (1401).
  • a human grader manually analyzes the color and intensity changes in retinal vessels and annotates the vessels that contain discoloration, which is considered as a ground truth.
  • FIG. 14(B) presents the ground truth annotation of discolored vessels as dashed lines (1402). As shown in FIG. 13, the vessel discoloration detection step 430 preferably comprises a vessel segmentation step 431, a normal vessel identification step 432, a feature extraction step 435 to determine and extract image features indicative of vessel discoloration within the image, a feature space reduction step 436, a feature-based classification step 437 based on a partial least squares classifier, and an image classification step 439 to assess the most likely class of the image, representing whether the image contains vessel discoloration.
  • the most likely class in step 439 is then preferably assigned to the image or set of images.
  • FIG. 14(C) illustrates an example of the detection of discolored vessels (marked as dotted lines: 1404) resulting from the steps carried out by the Vessel Discoloration Detection step 430 on a retinal image.
  • the vessel segmentation step 431 comprises modifications to a previously developed vessel segmentation algorithm by incorporating information from the green channel of the RGB color space and the ‘a’ channel of the CIE-Lab color space, which represents the green-red component of the vessel pixels.
  • the ‘a’ channel is useful in segmenting small, subtle vessels and especially the discolored vessels which are not segmented accurately using previously developed segmentation algorithm based on green channel features alone.
  • the analysis of the ground truth annotations for vessel discoloration (dashed line annotation (1402) in FIG.14(B)) demonstrates that the wider vessels (FIG. 14(A): 1406) close to the optic disc (FIG. 14(A): 1405) do not present with discoloration, and can be used as an indicator of normal vessel color.
  • this information is used in step 432 for identifying the normal vessel color within each image, by automatically identifying the dark, wider vessels closer to the optic disc using contrast enhancement and intensity thresholding. Furthermore, in step 432 the mean value is calculated over all normal vessel pixels, for each color channel. The algorithm then identifies the discolored vessels by calculating the intensity difference between each pixel of the remaining vasculature and the mean intensity value of the normal vasculature.
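A minimal sketch of scoring vessel pixels against this normal-vessel reference color (the masks are assumed to come from steps 431-432; the distance metric and names are hypothetical):

```python
import numpy as np

def discoloration_score(image: np.ndarray, vessel_mask: np.ndarray,
                        normal_mask: np.ndarray) -> np.ndarray:
    """Per-channel mean color of the wide, dark vessels near the disc
    defines 'normal'; the remaining vessel pixels are scored by their
    distance from that reference color."""
    normal_mean = image[normal_mask].mean(axis=0)       # per-channel normal color
    diff = image[vessel_mask].astype(float) - normal_mean
    return np.linalg.norm(diff, axis=1)                 # one score per vessel pixel
```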
  • the feature extraction step 435 comprises a feature extraction process based on three factors: 1) color of the discolored vessels is orange or white, 2) edge of the wall of discolored vessels has low contrast, and 3) regions close to the vessel wall get discolored first when the pathology appears.
  • the step 435 comprises extracting statistical image features such as median, standard deviation, kurtosis, skewness, moment, and 10th, 25th, 50th, 75th, 95th percentile of the content in the segment.
  • the statistical features are preferably extracted from various color spaces such as RGB, HSL, CIE-XYZ, CMYK, Luv, Lch; preferably in the center and wall areas of the vessels.
  • the feature vector also includes gradient features such as but not limited to, the gradient of the‘red’,‘green’ and‘a’ channels to measure the contrast of the vessel.
  • the feature space reduction step 436 comprises reducing the redundant features from the set of 440 features.
  • the total set comprises a plurality of image representations, e.g.
  • the step 436 uses ANOVA or other similar statistical techniques during the training phase to assess which of the features are the most significant (p < 0.05), and reduces the feature space by 75%.
  • the feature-based classification step 437 uses one or more image processing techniques to normalize the features to zero mean and unit standard deviation.
  • the normalized feature vector is integrated using a partial least squares (PLS) classifier to classify the discolored vessel segments using leave-one-out validation.
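A sketch of this ANOVA screening plus normalization and PLS classification using SciPy/scikit-learn; the p < 0.05 cutoff comes from the text, while the component count and the interface are assumptions:

```python
import numpy as np
from scipy.stats import f_oneway
from sklearn.cross_decomposition import PLSRegression

def select_and_classify(X: np.ndarray, y: np.ndarray, alpha: float = 0.05):
    """ANOVA-style screening (keep features that separate the two classes
    with p < alpha), z-score normalization, then a PLS classifier."""
    keep = [j for j in range(X.shape[1])
            if f_oneway(X[y == 0, j], X[y == 1, j]).pvalue < alpha]
    assert keep, "expected at least one significant feature"  # sketch assumption
    Xs = X[:, keep]
    Xs = (Xs - Xs.mean(axis=0)) / (Xs.std(axis=0) + 1e-9)     # mean 0, unit std
    pls = PLSRegression(n_components=max(1, min(10, len(keep))))
    pls.fit(Xs, y.astype(float))
    return pls, keep
```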
  • FIG. 14(C) depicts a retinal image with automated detection of discolored vessels marked as dotted lines (1404), and also shows the discolored vessels missed by the algorithm, marked as dashed-dotted lines (1403).
  • the image classification step 439 preferably assesses, for each image, the likelihood that it contains vessel discoloration, by counting the number of discolored vessels detected in the given image and preferably using the count to assess the likelihood value. Furthermore, a label is assigned to the given image or set of images based on the likelihood value, as “Vessel discoloration present” or “Vessel discoloration absent”.
  • Lesion based classification step 440 detailed in FIG.
  • the Malarial retinopathy detection step 450 uses the likelihood value from the lesion based classification step 440 to assign a label to an RGB retinal image or set of images that is indicative of the presence of MR and suggestive of an action that a physician can take, such as “MR present” or “MR absent”, “High risk of MR” or “Low risk of MR”, or “indeterminate”, or any other label obvious to anyone skilled in the art. As shown in FIG.
  • the MR detection step 450 preferably assigns the label “MR present” to an RGB retinal image when the Quantitative PLS regression Classification step 444 outputs a higher likelihood value (between 0 and 1) for the presence of MR, based on the whitening detection step 410 (which outputs a likelihood of presence of whitening), the hemorrhage detection step 420 (which outputs a likelihood of presence of hemorrhages), and the vessel discoloration detection step 430 (which outputs a likelihood of presence of vessel discoloration).
  • the MR detection step 450 preferably assigns the label “MR absent” to an RGB retinal image when the Quantitative PLS regression Classification step 444 outputs a lower likelihood value (between 0 and 1) for the presence of MR, based on the same three lesion likelihoods.
  • the interpretation of the“likelihood value for presence of MR” as low or high, or a threshold classifying the likelihood value as low or high, is based on the retinal image dataset or the patient population under investigation, and is determined by one or more clinical protocols based on the requirement of sensitivity and specificity of MR detection. It will be known to those skilled in the art that different values of a threshold may be used in the present invention, according to one or more clinical protocols.
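As a trivial illustration of this protocol-dependent thresholding (the 0.5 default is a placeholder, not a clinical recommendation; the function name is hypothetical):

```python
def mr_label(likelihood: float, threshold: float = 0.5) -> str:
    """Map the classifier's MR likelihood (0-1) to a clinical label; the
    threshold is set by the clinical protocol to meet the required
    sensitivity/specificity trade-off."""
    return "MR present" if likelihood >= threshold else "MR absent"
```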
  • If a retinal image is assigned the label “MR present”, the user should confirm the presence of CM and determine treatment strategies.
  • If a retinal image is assigned the label “MR absent”, the user should investigate other causes of coma and parasitic infection.
  • the goal of the MR detection software system is to reduce the false positive rate (FPR) by accurately diagnosing patients for the presence or absence of cerebral malaria based on retinal signs of Malarial retinopathy. The system therefore preferably operates at high specificity, and the individual MR lesion detection algorithms are preferably tuned at high-specificity settings.
  • Experimental results from application of one of the embodiments of the present invention as described herein have shown its utility in detecting malarial retinopathy in retinal images and are explained in the next paragraphs.
  • a retrospective dataset of N=86 retinal color images obtained from pediatric patients clinically diagnosed with CM was provided by the University of Liverpool.
  • the images were collected between years 2006 and 2014, at the Pediatric Research Ward at the Queen Elizabeth Central Hospital (QECH), University of Malawi College of Medicine (UMM), Blantyre, Malawi, Africa.
  • the images were captured using a Topcon 50EX retinal camera with a 50-degree field of view (FOV). All images were anonymized by the retinal reading center at the University of Liverpool and provided under a numeric identifier to the authors of this study.
  • the multiple images of each patient were stitched to create a mosaic image.
  • FIGS 16(A) and 16(B) show a retinal image and its ground truth annotation, respectively.
  • retinal whitening is annotated as dotted-striped pattern (1602), vessel discoloration as dashed lines (1604), and hemorrhages as solid striped pattern (1603).
  • the annotations were validated by an ophthalmologist with expertise in MR.
  • the annotated mosaic database was used to develop image processing algorithms to detect the three retinal signs of MR.
  • the algorithm performance was calculated in terms of image-based classification. For each image, the likelihood maps obtained for detection of a lesion were converted to a binary image using various thresholds and were compared against the annotated ground truth image. The image-based analysis determines the algorithm performance in classifying each retinal image as that with or without a lesion.
  • This classification technique considers a given image a positive detection if at least one of the lesions annotated in the ground truth is detected by the algorithm at a given threshold.
  • Image-based sensitivity is defined as the fraction of images marked positive in the ground truth that were detected as positive by the algorithm, while image-based specificity is defined as the fraction of images marked negative in the ground truth that were detected as negative by the algorithm.
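These image-based definitions reduce to simple counts over per-image booleans; a minimal sketch (names hypothetical):

```python
import numpy as np

def image_based_sens_spec(gt_positive, pred_positive):
    """gt_positive marks images with at least one annotated lesion;
    pred_positive marks images where the algorithm fired at the chosen
    threshold. Returns (sensitivity, specificity) at the image level."""
    gt = np.asarray(gt_positive, dtype=bool)
    pr = np.asarray(pred_positive, dtype=bool)
    sensitivity = (gt & pr).sum() / max(gt.sum(), 1)      # true positive rate
    specificity = (~gt & ~pr).sum() / max((~gt).sum(), 1)  # true negative rate
    return sensitivity, specificity
```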
  • whitening is present in 67 images and absent in 19 images.
  • three features were extracted from each image, i.e. 1) Maximum probability assigned to a splat in the image, 2) Sum of non-zero probabilities assigned to splats, 3) Product of non-zero probabilities assigned to splats.
  • the PLS classifier is used to classify each image as containing or not containing whitening, based on the above image features.
  • the classification of images with or without whitening achieved AUC of 0.81, with specificity of 94% and sensitivity of 65%.
  • the algorithm can also be operated at a high-sensitivity point of 78% sensitivity at 65% specificity.
  • the dataset has a distribution of 57 images with hemorrhages and 29 images without hemorrhages.
  • the average number of hemorrhages per image is 15.
  • the number of hemorrhages detected by the algorithm is counted in the given image, and the count is used to determine whether the image has hemorrhages.
  • the classification of images with or without hemorrhages achieved an AUC of 0.89, with specificity of 96% and sensitivity of 73%.
  • the algorithm can also be operated at a high-sensitivity point of 100% sensitivity at 78% specificity.
  • the dataset contains 49 images with vessel discoloration and 37 without discoloration.
  • the number of vessel segments classified as containing discoloration is counted in the given image, and the count is used to determine whether the image has discolored vessels.
  • the image-based classification obtained an AUC of 0.85 with specificity of 100% and sensitivity of 62%.
  • the algorithm can also be operated at a high-sensitivity point of 90% sensitivity at 66% specificity.
  • the individual MR lesion detection algorithms were integrated using a PLS classifier for detecting the presence or absence of MR with high specificity, while maintaining a minimum sensitivity of 90% for MR detection.
  • the MR detection algorithm yielded an AUC of 0.97 with specificity of 100% at sensitivity of 95%.
  • the positive predictive value (PPV) for MR detection was 0.98.
  • Table 1 shows the performance of the detection algorithms for vessel discoloration, hemorrhages, retinal whitening, and MR detection, in terms of AUC, tuned at high specificity while maintaining moderate sensitivities, with the respective 95% confidence intervals (CI).
  • the AUCs for individual lesion detection in the range of 0.81 - 0.89 indicate the capability of respective algorithms in distinguishing between an MR pathology and retinal background.
  • the AUC of 0.97 for MR detection indicates that, given a random retinal image of a clinically diagnosed CM patient, the proposed MR detection system will correctly identify presence or absence of MR in about 97% of the patients.
  • operating this system at a high-specificity setting for diagnosis of MR (100% specificity at 95% sensitivity) means a large reduction in false positive diagnoses of cerebral malaria cases.
  • the experimental results presented herein show an automated method for detection of MR lesions and a regression classifier that categorizes a patient-case into MR/no-MR, using retinal color images.
  • the software algorithms perform with sufficient accuracy to enable highly specific detection of MR, which can improve the specificity and positive predictive value of CM diagnosis.
  • computer systems and communication infrastructure comprise a frame buffer (not shown) connected to the display monitor.
  • Computing Unit 600 preferably includes a main memory, for example random access memory (“RAM”), read-only memory (“ROM”), and a mass storage device, including an automated onsite and off-site backup system.
  • Computing Unit 600 may also include a secondary memory such as a hard disk drive, a removable storage drive, an interface, or any combination thereof.
  • Computing Unit 600 may also include a communications interface, for example, a modem, a network interface (such as an Ethernet card), a communications port, a PCMCIA slot and card, wired or wireless systems, or other means of electronic communication.
  • Computer programs are preferably stored in main memory and/or secondary memory. Computer programs may also be received via the communications interface. Computer programs, when executed, enable the computer system, particularly the processor, to implement the methods according to the present invention.
  • the methods according to embodiments of the present invention may be implemented using software stored in a computer program product and loaded into the computer system using removable storage drive, hard drive or communications interface.
  • Embodiments of the present invention are also directed to computer products, otherwise referred to as computer program products, to provide software to the computer system.
  • Computer products store software on any computer useable medium. Such software, when executed, implements the methods according to one embodiment of the present invention.
  • Embodiments of the present invention employ any computer useable medium, known now or in the future.
  • Examples of computer useable mediums include, but are not limited to, primary storage devices (e.g., any type of random access memory), secondary storage devices (e.g., hard drives, floppy disks, CD ROMS, tapes, magnetic storage devices, optical storage devices, Micro-Electro-Mechanical Systems (“MEMS”), nanotechnological storage devices, cloud storage, etc.), and communication mediums (e.g., wired and wireless communications networks, local area networks, wide area networks, intranets, etc.).
  • Image Quality Analyzer phase 300 and MR detection phase 400 preferably use machine- coded mathematical algorithms running in a Computing Unit 600.
  • Computing Unit 600 preferably comprises a self-contained unit comprising one or more human-machine interfaces, power supplies, memories, processing units, and storage units and is preferably connected directly to the Retinal Camera 200 through a communication medium that transports the live video feed 270. Further, a Computing Unit 600 is preferably integrated into the base of the Retinal Camera 200 such that said Computing Unit 600 does not add to the footprint and space requirements of the Retinal Camera 200.
  • An embodiment of Computing Unit 600 comprises one or more machine-implemented tools and applications to distribute, manage, and process retinal images transmitted by the retinal camera 200
  • the Computing Unit 600 further comprises one or more types of computing memory 610, computing processors 620, and graphic processing units 630.
  • FIG. 17 illustrates an embodiment of computing unit 600.
  • the computing memory 610 preferably comprises solid state and magnetic memory media to store the image data to be processed as well as data generated during intermediate steps in the Image Quality Analyzer phase 300 and the MR detection phase 400.
  • the computing processors 620 preferably execute machine-coded algorithms on solid state computer circuits according to the Image Quality Analyzer phase 300 and the MR detection phase 400.
  • One or more graphic processing units 630 preferably comprise two or more parallelized solid state computing cores to divide processing tasks into pieces that can be done in parallel in said computing cores.
  • FIG. 20 illustrates an embodiment of the present invention comprising a Computing Unit 600 comprising computing memory 610, computing processors 620, and graphic processing units 630 implemented as a single board computer (SBC) 640 that can operate onboard the retinal camera 200.
  • FIG. 19 illustrates an embodiment of the present invention comprising a Computing Unit 600 comprising computing memory 610, computing processors 620, and graphic processing units 630 implemented as a system on a chip (SOC) 650 that can be located onboard the retinal camera 200.
  • FIG. 21 illustrates an embodiment of the present invention comprising a Computing Unit 600 comprising computing memory 610, computing processors 620, and graphic processing units 630 implemented in a personal computer (PC) 660 that can operate separate and independently from the retinal camera 200.
  • FIG. 22 illustrates an embodiment of the present invention comprising a Computing Unit 600 comprising computing memory 610, computing processors 620, and graphic processing units 630 implemented as a cloud based system 670 through an Internet connection to a distributed data center 675 and operated separately and independently from the retinal camera 200.
  • FIG. 23 illustrates an embodiment of the present invention comprising a Computing Unit 600 comprising computing memory 610, computing processors 620, and graphic processing units 630 implemented as a centralized processing center or central server 680 through a wired, wireless, or internet connection to one or more dedicated server computers 685 and operated separately and independently from the retinal camera 200.


Abstract

Systems and methods of detecting malarial retinopathy using color retinal fundus images by minimally trained persons, which include a retinal camera for obtaining images of a fundus of a subject's eye, in combination with mathematical methods to assign real-time image quality classification to the images and mathematical methods to perform real-time detection of malarial retinopathy in the images, for clinical interpretation by machine-coded and/or human-based methods. The classified images are further processed if they are of sufficient image quality for clinical interpretation by machine-coded and/or human-based methods. Such systems and methods can thus automatically determine whether a retinal image presents a case of malarial retinopathy, which is indicative of cerebral malaria disease. The system integrates color features, textural features, morphological features, and statistical features. A partial least squares (PLS) classifier is trained to distinguish retinal images with malarial retinopathy from images with no malarial retinopathy.

Description

SYSTEM AND METHODS FOR MALARIAL RETINOPATHY SCREENING CROSS-REFERENCE TO RELATED APPLICATIONS [0001] This application claims priority to and the benefit of the filing of U.S. Provisional Patent Application Serial No. 62/198,981, entitled "System and Methods for Malarial Retinopathy Screening", filed on July 30, 2015, and is related to U.S. Application No. 14/259,014 titled: System and Methods for Automatic Processing of Digital Retinal Images in Conjunction with an Imaging Device, filed April 22, 2014; and the specification and claims thereof are incorporated herein by reference. STATEMENT REGARDING FEDERALLY SPONSORED RESEARCH OR DEVELOPMENT [0002] This invention was made with government support under 1R43AI112164-01 and 2R44AI112164-03A1, awarded by the National Institutes of Health. The government has certain rights in the invention. FIELD OF THE INVENTION [0003] Embodiments of the present invention teach a method to process a retinal image in an automatic manner for detecting retinal lesions associated with Malarial retinopathy (MR). Further embodiments of the present invention teach the integration of the method with a retinal imaging camera using an embedded computing unit for implementing the methods of the present invention, displaying MR screening results to a user, and forwarding the retinal images for further evaluation by a medical expert; as a tool for improving the diagnostic accuracy for Cerebral Malaria (CM). The method of the present invention detects images with MR and presence of MR lesions comprising retinal whitening, vessel discoloration, and hemorrhages. The retinal camera comprises any imaging device that produces color images of the retina. The camera comprises computer software to assess quality of an input color image and provides feedback to a photographer if the image quality is lower than a predefined reference. One embodiment of the present invention is the computer method integrated to a low-cost, portable retinal camera, such as but not limited to, VisionQuest’s iRx-Cam. In one embodiment of the present invention the computer method was evaluated on a representative set of 86 retinal color images from CM subjects. The MR was detected with sensitivity of 95% and specificity of 100%. INTRODUCTION [0004] Note that the following discussion may refer to a number of publications and references. Discussion of such publications herein is given for more complete background of the scientific principles and is not to be construed as an admission that such publications are prior art for patentability determination purposes. [0005] Cerebral malaria (CM) is a life-threatening clinical syndrome associated with malarial infection. The 2014 World Malaria report estimates that malaria affected over 198 million people in 2013, and claimed the lives of 584,000 people worldwide, about 75% of whom were African children under 5 years of age. Most of these mortalities were thought to be due to CM. However, Taylor et al. found in autopsies that 23% of children diagnosed with CM actually died of other untreated disease, in which cases other concurrent diseases with similar symptoms produced false positive test results. The lack of treatment of these concurrent diseases often results in death or can have a long-lasting socio-economic impact on survivors in terms of neurologic disabilities. Annually, CM results in a loss of $12 billion USD in gross domestic product (GDP) and 35.4 million disability adjusted life years due to mortality and morbidity of CM patients in Africa.
Thus, a more specific diagnostic to confirm the presence of CM is critically needed to improve clinical outcomes. Lewallen et al. first reported on malarial retinopathy (MR), highly specific retinal lesions associated with CM. This discovery now offers an effective means for improving the specificity of CM diagnosis through retinal imaging. However, examining patients for signs of MR is limited by the small number of ophthalmologists or ophthalmic experts in most regions of Africa and by the lack of accessible and affordable retinal imaging cameras. [0006] CM is caused by the Plasmodium falciparum (PF) parasite that sequesters the erythrocytes in micro-vessels of cerebral and retinal circulation, causing the appearance of retinal abnormalities of MR, visible in color retinal images. The histological features of MR present as retinal whitening and vessel discoloration which are unique to MR, are related to ischemia, and are a reflection of processes occurring in the brain. The white-centered hemorrhages correlate with the cerebral hemorrhages in number and size (ρ=0.8). The extent of retinal whitening and number of hemorrhages are related to the duration of coma and likelihood of death from CM. For a comatose patient who tests positive for the PF parasite, screening for MR improves the specificity of CM diagnosis from 61% (using the World Health Organization: WHO criteria) to 100% (WHO criteria + MR). Studies have reported that presence of MR has 95% sensitivity and 100% specificity in patients with severe CM cases that were autopsied. Overall, using MR to diagnose CM in a potentially fatal coma produces a positive predictive value (PPV) of 95% and a negative predictive value of 90%, compared with a PPV of 77% when diagnosed only with parasitemia, coma and exclusion of other causes. Therefore, screening for MR will vastly improve diagnosis of CM and reduce the number of deaths due to other diseases not treated after a false diagnosis of CM. [0007] Under the current diagnostic criteria followed by most of the clinics in Africa, when a comatose patient is admitted to the hospital, a Plasmodium Falciparum (PF) parasite detection test is performed using microscopy or rapid diagnostic tests. Those tests have reported sensitivity of 98% and specificity of 93%. However, because individuals living in malaria-endemic areas can tolerate PF infections without developing symptoms, the presence of the PF parasite alone does not always reflect a causal association with coma in patients who appear to have CM; and this can lead to incorrect treatment and potentially death. CM is diagnosed clinically based on WHO criteria which consist of physical symptoms (seizures, convulsions, etc.), a Blantyre coma score ≤ 2, blood sample analysis for detection of the PF parasite, and exclusion of other encephalopathies causing coma (e.g. meningitis, post-ictal state, hypoglycemia). However, Beare et al. (2006) reported that the specificity of the WHO criteria is only 61%, while Taylor et al. (2004) reported a specificity of 77%. [0008] Using the specificity values from the previous discussion and a CM prevalence of 46%, as reported by Reyburn et al., the positive predictive value (PPV) for CM in current clinical practice is obtained in the range of 0.68 to 0.78.
In one embodiment of the present invention, a computer method is applied on color retinal images (after a positive PF parasite detection) to confirm the presence or absence of MR with a target specificity of 95% for detection of CM, which results in PPV in the range of 0.97 to 0.98. This 20- to 30-point increase in PPV means that less than one percent of the comatose patients may be incorrectly treated for CM when coma is actually due to other causes. For example, using only current diagnostic methods, out of 1,000 comatose children admitted in a hospital, 124 to 210 will be misdiagnosed with CM. By using the computer method of the present invention in addition to current clinical protocols for CM diagnosis, the number of misdiagnosed CM children can be reduced to as little as 6 per 1000. [0009] Once a clinical diagnosis of CM is made (parasitemic comatose child), parenteral anti-malarial treatment (artesunate or quinine) is administered. An important point to make is that the clinical diagnosis of CM is highly sensitive (>95%), and that an application of the present invention is to reduce the false positive rate and to increase the positive predictive value after a diagnosis for CM has already been made through clinical symptoms. Therefore, the present invention’s sensitivity to MR diagnosis is not an issue as both MR-positive (that indicates true CM) and MR-negative (that indicates non-CM disease) patients get treated for CM. However, the improved specificity to MR diagnosis plays a vital role in accurate identification of non-CM cases (MR-negative) which can then be investigated for other causes of coma. Therefore, treatment strategies for other non-malarial causes of coma such as pneumonia or bacterial meningitis can be applied, thus saving hundreds of thousands of children’s lives. [0010] To assess the presence of MR, an ophthalmologist performs indirect ophthalmoscopy. At present, the specialized equipment and human skill required for indirect ophthalmoscopy remain barriers to wider use of MR detection in clinical practice. The utilization of retinal images obtained by a desk-mounted camera provides another option; however, these cameras are expensive and bulky, and hence not easily accessible. Hand-held cameras exist, but their average cost of around $10,000 is beyond the means of many clinics and hospitals in Africa where these devices would be most effective in diagnosis of MR. An automatic device to determine the presence or absence of MR is thus needed to improve the diagnostic accuracy of CM, which can be easily accessible and affordable to the affected population in Africa. Embodiments of the present invention comprise computer implemented methods that automatically analyze the retinal color images to detect MR lesions associated with cerebral malaria, which can be integrated into a low-cost and portable image acquisition apparatus as part of an MR screening system. A further embodiment of the present invention provides automatically generated cues to the user when an image is unusable because of low quality comprising non-uniform illumination, low contrast, and noise. These tasks are preferably performed in real time, while the patient is still being imaged, by a computing unit that executes computer implemented algorithms and is integrated with the image acquisition system, e.g. a retinal camera.
[0011] The retinal camera of one embodiment of the present invention is designed to meet the clinical environment for imaging the CM affected population in Africa, including being portable and low-cost, such as but not limited to, VisionQuest’s i-RxCam. Possible modifications to the camera comprise a wide field of view (FOV) contact lens to make the imaging process convenient for the user and the comatose child (non-cooperative patient). It would be clear to others with knowledge of this art that other modifications may be suitable within the scope of the present invention. The camera is preferably equipped with auto-image-quality feedback software that alerts a photographer on the quality of captured images and on whether there is a need of re-imaging. Considering the technologically disadvantaged regions in Africa, an embodiment of the present invention as described herein preferably processes the retinal images using a computing processor that produces MR detection results without a need of human intervention. The device design reduces the need for high levels of training or skill and can be operated by a medical technician or a nurse. A further embodiment of this invention comprises the integration of the retinal camera with a multitude of software-based retinal screening applications to detect other equally prevalent retinal diseases in Africa comprising diabetic retinopathy (DR), age-related macular degeneration, and glaucoma; making it a highly cost-effective solution. [0012] An embodiment of the present invention comprises the first fully automated software for comprehensive analysis of MR abnormalities and their statistically optimal combination to detect presence/absence of MR with high accuracy. This also eliminates the challenges to train healthcare staff to perform ophthalmoscopy and interpret retinal images. The method is tuned to yield high specificity, addressing the current clinical requirement to prevent deaths resulting from over-diagnosis of CM. [0013] An embodiment of the present invention is designed to address the current clinical needs in Africa: 1) a low-cost, portable retinal imaging device (retinal camera) affordable and accessible to the targeted population, 2) a retinal camera designed with a wide field of view lens for detecting unique MR lesions, 3) fully automated software-based MR detection using a portable processing unit, 4) easy adaptability in clinical settings in Africa for the target users such as medical technicians, nurses, and doctors working at a clinical facility for CM diagnosis. Additionally, an embodiment of the present invention is an important tool to clinical investigators, epidemiologists, and policy makers by making it possible to track the incidence of "true CM" for directing the malaria control programs to make the most economic impact of improving the healthcare in Africa. [0014] Based on a publication database, no prior art has been disclosed for a hand-held retinal imaging device with an automatic system to detect MR based on retinal color images. A PubMed search on “Malaria Retinopathy Detection using retinal color images” and similar terms returned only two papers outside of the ones written by the inventor (V Joshi et al.) for MR hemorrhage detection. In the first paper, Saleem et al. teach methods for hemorrhage detection in MR using retinal color images, whereas, in the second paper, Ashraf et al. teach a method to detect both macular whitening and retinal hemorrhages.
However, both methods report test results on images with diabetic retinopathy, not MR images. A search of the US patent office did not return any patents related to MR. [0015] A few publications have reported MR detection methods based on the analysis of fluorescein angiography images. Zhao et al. teach a method for detection of vascular leakage in fluorescein angiogram images, which corresponds to MR whitening. The authors used graph methods and saliency maps to detect the leakage location. Another paper by Zhao et al. teaches an automated method for detection of MR vascular abnormalities, such as intravascular filling defects, using fluorescein angiogram images. [0016] An aspect of one embodiment of the present invention comprises an automated method for MR detection; software for detection of MR lesions; and an MR detection model that classifies each patient case into MR or no-MR categories. The method is preferably integrated with a low-cost, easy-to-use, and portable retinal imaging camera. The users of a preferred embodiment of the present invention comprise medical technicians, nurses, and doctors working at a clinical facility providing diagnosis and treatment of patients with symptoms of CM. The use of an embodiment of the present invention improves the positive predictive power for detection of CM and thus alerts caregivers to look for other conditions that cause coma when MR is not present, thereby significantly reducing the current rate of CM misdiagnosis. BRIEF DESCRIPTION OF THE INVENTION [0017] One embodiment provides for a method to perform automatic malarial retinopathy detection wherein a retina is illuminated using an illumination source and an image of the retina is captured with a retinal camera. The retinal image is transmitted to a processor, where the processor performs an assessment in real time of the image quality, wherein image quality is determined by one or more of the following image quality analysis steps: 1) determining alignment of the image; 2) determining presence and extent of crescents and shadows in the image; and 3) determining quantitative image quality of the image via a classification process trained using examples of visual perception by human experts. If the image quality does not meet a predetermined quality requirement, the camera is adjusted. The adjusting step may employ a user interface to indicate to a user the quality of the image and suggested actions to take with respect to the camera. The performing an image quality assessment step may classify the image according to a set of image quality labels. An image quality descriptive label may be assigned to an image. An image that meets the predetermined image quality requirement is further processed by the processor for detection in real time of malarial retinopathy in the retinal image, wherein malarial retinopathy is determined by one or more of the following detection steps: 1) determining the presence of retinal whitening in the image; 2) determining the presence of hemorrhages in the image; 3) determining the presence of vessel discoloration in the image; and 4) determining quantitative likelihood of presence of malarial retinopathy in the image via a classification process trained using examples of visual perception of malarial retinopathy by human experts. The performing a malarial retinopathy detection step may transform the image information into a plurality of color spaces, color, texture, statistical features, or any combination thereof.
The performing step may group a plurality of features into a plurality of feature vectors. For example, the performing step determines presence and extent of retinal whitening in the image according to one or more clinical protocols. Further, the performing step may determine the presence and extent of hemorrhages in the image according to one or more clinical protocols. In one embodiment the performing step uses a retinal camera with an optical contact lens with a wide field of view, for example greater than 120 degrees, but not limited thereto; in some embodiments a field of view of between 50 and 100 degrees and/or between 100 and 120 degrees can be used. [0018] When the performing a malarial retinopathy detection step determines presence and extent of retinal whitening in the image, the executing step may employ a reduction of camera reflex in the image using color and textural features to distinguish between the reflex and true whitening, and/or the executing step may additionally employ color, textural, morphological, and statistical feature extraction steps to form the feature vectors. The executing step may additionally comprise assigning a descriptive label to the image as to presence and extent of retinal whitening according to one or more clinical protocols. For example, the executing step may employ a threshold to assign labels to the image. Further, the executing step may additionally comprise tuning one or more parameters to vary assignment of labels and/or performance of a classification model in terms of sensitivity and specificity. Further, the executing step additionally employs a color, difference-of-Gaussians, contrast, and morphological feature extraction phase to form the feature vectors. [0019] When the performing a malarial retinopathy detection step determines the presence and extent of hemorrhages in the image according to one or more clinical protocols, the executing step additionally employs a color, difference-of-Gaussians, contrast, and morphological feature extraction phase to form the feature vectors and/or additionally comprises assigning a descriptive label to the image as to presence and extent of hemorrhages according to one or more clinical protocols. The executing step may employ a threshold to assign labels to the image. The executing step may additionally comprise tuning one or more parameters to vary assignment of labels and/or performance of a classification model in terms of sensitivity and specificity. Additionally, a descriptive label may be assigned to the image as to presence and extent of hemorrhages according to one or more clinical protocols. For example, the executing step employs a threshold to assign labels to the image. Further still, the executing step additionally comprises tuning one or more parameters to vary assignment of labels and/or performance of a classification model in terms of sensitivity and specificity. [0020] When the performing a malarial retinopathy detection step determines presence and extent of vessel discoloration in the image according to one or more clinical protocols, the executing step additionally employs a color, intensity gradient, and statistical feature extraction phase to form the feature vectors. Further still, the executing step additionally employs a feature reduction phase to reduce the number of features. Additionally, a descriptive label is assigned to the image as to presence and extent of vessel discoloration according to one or more clinical protocols.
For example, the executing step employs a threshold to assign labels to the image. Additionally, the executing step comprises tuning one or more parameters to vary assignment of labels and/or performance of a classification model in terms of sensitivity and specificity. Further, the step of determining quantitative likelihood of presence of malarial retinopathy in the image via a classification process trained using examples of visual perception of malarial retinopathy by human experts is performed in real time. Thereafter, a label is assigned to the image indicative of the presence or absence of malarial retinopathy according to the likelihood value of the output. Following real-time determination of the quantitative likelihood of presence of malarial retinopathy in the image via a classification process trained by human graders, the executing step comprises tuning one or more parameters to vary assignment of labels and/or performance of a classification model in terms of sensitivity and specificity. According to one embodiment, determining quantitative likelihood of presence of malarial retinopathy in the retinal image via a classification process trained using examples of visual perception of malarial retinopathy by human experts further comprises a high-specificity classification model and/or a high-sensitivity classification model based upon the quantitative result. Further still, a label is assigned to the image indicative of the presence or absence of malarial retinopathy according to the likelihood value of the output. [0021] According to another embodiment a system is provided to perform automatic retinal screening, said system comprising an illumination source illuminating a retina; a retinal camera capturing a retinal image; and a processor that receives the image and performs the following steps on the received image: performing an assessment in real time of the image quality, adjusting a setting of the camera if the image quality does not meet predetermined quality requirements, and performing an assessment in real time of the presence or absence of malarial retinopathy.
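By way of non-limiting illustration, the threshold selection and sensitivity/specificity tuning described above might be sketched as follows (hypothetical function names; scikit-learn is assumed available; this is a sketch, not the actual implementation):

```python
import numpy as np
from sklearn.metrics import roc_curve

def pick_threshold_for_specificity(y_true, likelihoods, target_specificity=0.95):
    """Choose an operating threshold that meets a target specificity
    (specificity = 1 - false positive rate) on a validation set, then
    maximizes sensitivity among the qualifying operating points."""
    fpr, tpr, thresholds = roc_curve(y_true, likelihoods)
    meets_target = fpr <= (1.0 - target_specificity)
    best = np.argmax(np.where(meets_target, tpr, -1.0))
    return thresholds[best]

def assign_label(likelihood, threshold):
    """Map a per-image MR likelihood to a descriptive label."""
    return "MR present" if likelihood >= threshold else "MR absent"
```

Raising the target specificity yields a high-specificity model; lowering it (or thresholding on sensitivity instead) yields a high-sensitivity model.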
[0022] Another embodiment provides a method to automatically determine presence or absence of malarial retinopathy in a retinal image, the method comprising the steps of illuminating a retina using an illumination source. The retinal image is captured with a retinal camera. The image is transmitted to a processor, and via the processor an assessment in real time of the image quality is performed, followed by an assessment in real time of the presence or absence of malarial retinopathy with one or more of the following steps: detecting the presence or absence of retinal whitening; detecting the presence or absence of hemorrhages; detecting the presence or absence of vessel discoloration; and detecting the presence or absence of malarial retinopathy via a classification process trained using example images containing retinal lesions of malarial retinopathy. [0023] One embodiment of the present invention comprises a computer method to conduct real-time analysis of the likelihood of presence or absence of MR in the retina of the image. Further, a label is assigned to the image or set of images indicative of the likelihood. An assessment of images in real time is conducted via a computer processor using three methods for the detection of retinal lesions associated with malarial retinopathy: retinal whitening, white-centered hemorrhages, and vessel discoloration. For example, the performing step determines whether a retinal lesion such as retinal whitening is present, according to one or more clinical protocols. Further, a descriptive label is assigned to the image indicating whether or not it shows retinal whitening, according to one or more clinical protocols. In another example the performing step determines presence and extent of hemorrhages and/or vessel discoloration in the image using image analysis and feature-based classification of images. Further, a descriptive label is assigned to the image as to presence and extent of hemorrhages and/or vessel discoloration. The performing step may additionally employ a feature extraction step to group color, intensity, morphology, texture, and other distinguishing feature components into feature vectors. Further still, the executing step may additionally employ a feature reduction step to reduce the number of features. Alternatively, the executing step additionally comprises tuning one or more parameters to vary assignment of labels and/or performance of a classification model in terms of sensitivity and specificity. For example, the executing step employs a threshold to assign labels to the image or set of images. For example, in one embodiment of the present invention, the performing step comprises a high-sensitivity classification model. In another embodiment, the performing step comprises a high-specificity classification model. [0024] In yet another example, the performing step determines a quantitative index describing the probability of presence of malarial retinopathy in the image via a classification process trained using examples of images labeled by human experts for presence or absence of malarial retinopathy. Further, a descriptive label is assigned to the image for presence or absence of malarial retinopathy (binary classification). According to another example the performing step classifies the image according to a set of labels describing presence or absence of malarial retinopathy.
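By way of non-limiting illustration, the feature-grouping and feature-reduction steps described above might be sketched as follows (PCA is one possible reduction method; the inputs are hypothetical placeholders for the extracted feature components):

```python
import numpy as np
from sklearn.decomposition import PCA

def build_feature_vector(color, intensity, morphology, texture):
    """Group per-image feature components into a single feature vector."""
    return np.concatenate([color, intensity, morphology, texture])

def reduce_features(feature_matrix, variance_to_keep=0.95):
    """Feature reduction step: project the training matrix (one row per
    image) onto components retaining, e.g., 95% of the variance."""
    pca = PCA(n_components=variance_to_keep)
    return pca.fit_transform(feature_matrix), pca
```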
[0025] In yet another embodiment, a system to perform automatic retinal screening for malarial retinopathy is disclosed, having an illumination source illuminating a retina; a retinal camera capturing a retinal image; and a processor receiving the image, performing an assessment in real time of the quality of the image, performing an assessment in real time of the presence or absence of malarial retinopathy, and taking further investigative or treatment actions according to the results presented by the system. [0026] In a further embodiment, a method to automatically determine presence or absence of malarial retinopathy in a retinal image is provided. The method comprises the steps of illuminating a retina using an illumination source; capturing the retinal image with a retinal camera; transmitting the image to a processor; performing via the processor an assessment in real time of the quality of the image; performing via the processor an assessment in real time of the presence or absence of malarial retinopathy, comprising determining presence of retinal whitening lesions according to one or more clinical protocols, determining presence and extent of hemorrhages and/or vessel discoloration in the image, and determining a quantitative index describing a probability of presence of malarial retinopathy in the image via a classification process trained using examples of images labeled by human experts for presence or absence of malarial retinopathy; and determining investigative diagnosis or treatment strategies according to the results presented. [0027] The adjusting step may employ a user interface to indicate to a user the presence or absence of malarial retinopathy in the image and suggested actions to take with respect to the investigative diagnosis or determination of a treatment strategy. [0028] One embodiment of the present invention comprises a retinal camera that can be used for imaging comatose children. The retinal camera may be modified to suit the clinical requirements, with components comprising an optical contact lens with a wide field of view (>120 degrees) and a processor that forms an interface between the algorithm system and the retinal camera. [0029] One or more embodiments of the present invention overcome barriers to adoption of retinal imaging in primary care and hospital settings by the combination of a low-cost, portable, easy-to-use retinal camera and software-based methods to detect the presence of malarial retinopathy. Moreover, determination of input retinal image quality is preferably provided in real time to the person acquiring the images, i.e. the user, before image acquisition, i.e. during alignment of the camera to the patient's eye. The detection of malarial retinopathy using software-based methods preferably comprises descriptive labels that indicate the types of retinal lesions associated with MR and that guide the user in determining the presence and/or severity of MR and the risk of CM. The embodiments described herein, when using real-time retinal image screening, provide a level of effectiveness and efficiency not available in prior art systems. [0030] An embodiment of the present invention relates to a system and method for acquiring retinal images for automatic screening by machine-coded algorithms running on a computational unit.
This embodiment comprises a method that allows a user to use a retinal camera to record retinal images with adequate image quality according to certain rules that assess the quality of the images for further machine-based or human-based processing and/or clinical assessment. A retinal camera of this embodiment preferably comprises a contact lens with a wide FOV (>120 degrees) that can allow for imaging of the central and peripheral retina using no more than three shots. Further, another embodiment of the present invention describes a machine-based method to detect the presence of retinal lesions associated with MR and the presence of MR in retinal images. MR detection methods of the present invention comprise detection of retinal lesions, for example, retinal whitening, hemorrhages, and vessel discoloration. Another embodiment of the present invention describes a machine-based method for integrating the detection of the three lesions to assess the likelihood of presence of MR. A set of thresholds on the likelihood values can be determined so that images can preferably be assigned MR presence-associated labels such as "MR present" or "MR absent", and others as explained herein. MR lesion detection methods of the present invention comprise machine-based transformations that can be carried out onboard a computing device integrated directly into a retinal camera. The present invention also relates to methods to assign quantitative probability values or labels to sets of one or more retinal images related to the likelihood of the presence of MR as determined by machine code. [0031] Objects, advantages, novel features, and further scope of applicability of the present invention will be set forth in part in the detailed description to follow, taken in conjunction with the accompanying drawings, and in part will become apparent to those skilled in the art upon examination of the following, or may be learned by practice of the invention. The objects and advantages of the invention may be realized and attained by means of the instrumentalities and combinations particularly pointed out in the appended claims. BRIEF DESCRIPTION OF THE DRAWINGS [0032] The accompanying drawings, which are incorporated into and form a part of the specification, illustrate an embodiment of the present invention and, together with the description, serve to explain the principles of the invention. The drawings are only for the purpose of illustrating various embodiments of the invention and are not to be construed as limiting the invention. In the drawings:
[0033] FIG. 1 illustrates a flow of images, data, and analysis according to one embodiment of the present invention for image quality and malarial retinopathy detection.
[0034] FIG. 2 depicts a more detailed view of the processes in FIG. 1. [0035] FIG. 3(A) depicts an embodiment of the system of the present invention comprising a retinal camera with embedded computing unit module, chin rest, camera base, and external fixation light. FIG. 3(B) depicts an alternate embodiment of the system of the present invention comprising a retinal camera with embedded computing unit module, chin rest, and wall-mounted articulated camera mount. [0036] FIGS. 4(A)-4(D) illustrate a set of sample images adapted from digital retinal images of adequate quality of a subject with advanced diabetic retinopathy. FIG. 4(A) is an illustration of an image taken with the retinal camera of FIG. 3. FIG. 4(B) is an illustration of an image taken with a Canon CR 1 Mark II desktop retinal camera, which also shows retinal vessels (401) and an optic disc (402). FIGS. 4(C) and 4(D) show the details of micro-aneurysms (403), taken with the retinal camera of FIG. 3 and the Canon CR 1 Mark II desktop retinal camera, respectively. The images from both cameras show the same level of detail and are adequate for further processing or evaluation.
[0037] FIG. 5 illustrates a design of a contact lens with a wide field of view integrated into the retinal camera of FIG. 3. FIG. 5(A) shows the optical lens system, and FIG. 5(B) shows the contact lens design inside the retinal camera of FIG. 3, along with the light ray tracing (dashed lines). [0038] FIG. 6 illustrates a set of sample images of an artificial eye. FIG. 6(A) is an image taken with the retinal camera of FIG. 3 without a wide field of view (FOV) lens, which shows retinal structures such as retinal vessels (601), optic disc (602), and retinal lesions (603). FIG. 6(B) is an image taken with the retinal camera of FIG. 3 with a wide FOV lens, which shows the same retinal structures: retinal vessels, optic disc, and retinal lesions. The images show the effectiveness of using a wide FOV lens. [0039] FIG. 7 illustrates a set of sample retinal images adapted from a digital retinal image of a subject with malarial retinopathy taken with a retinal camera. FIG. 7(A) is an image with an arrow pointing towards the retinal whitening (701). FIG. 7(B) is an image with arrows pointing towards the retinal hemorrhages (702). FIG. 7(C) is an image with arrows pointing towards the vessel discoloration (703). The images show an adequate level of detail required for further processing or evaluation.
[0040] FIG. 8 depicts an embodiment of the steps for the whitening detection phase 410. [0041] FIG. 9 depicts an example image showing annotation of retinal whitening (dense striped pattern) and annotation of camera reflex (sparse striped pattern). [0042] FIG. 10(A) depicts an example image showing retinal whitening as a dense striped pattern (1001). FIG. 10(B) depicts a ground truth annotation (1002) for whitening in the image in FIG. 10(A). FIG. 10(C) depicts a likelihood map generated by the algorithm (1003) for detection of whitening in FIG. 10(A), using whitening detection step 410. [0043] FIG. 11 depicts the steps of hemorrhage detection step 420 according to one embodiment of the present invention. [0044] FIG. 12(A) depicts an example image showing hemorrhages (1201). FIG. 12(B) depicts a ground truth annotation (1202) for hemorrhages in the image in FIG. 12(A). FIG. 12(C) depicts a likelihood map generated by the algorithm (1203) for detection of hemorrhages in FIG. 12(A), using hemorrhage detection step 420. [0045] FIG. 13 depicts the steps for vessel discoloration detection phase 430. [0046] FIG. 14(A) depicts an example image showing vessel discoloration (1401) and other retinal structures such as the optic disc (1405) and normal retinal vessels (1406). FIG. 14(B) depicts a ground truth annotation as dashed lines (1402) for vessel discoloration in the image in FIG. 14(A). FIG. 14(C) depicts the discolored vessels detected (dotted lines: 1404) or missed (dashed-dotted lines: 1403) by the algorithm for detection of vessel discoloration in FIG. 14(A), using vessel discoloration detection phase 430. [0047] FIG. 15 depicts the steps of malarial retinopathy detection phase 450 according to one embodiment of the present invention. [0048] FIG. 16(A) depicts an example image from the dataset showing an optic disc (1601) and the MR lesions. FIG. 16(B) depicts a ground truth annotation for MR lesions in the image in FIG. 16(A). The whitening is marked as a dotted-striped pattern (1602), hemorrhages as a solid striped pattern (1603), and vessel discoloration as dashed lines (1604). [0049] FIG. 17 depicts elements in a computing unit 600 according to one embodiment of the present invention. [0050] FIG. 18 depicts elements in an Image quality analyzer 300 according to one embodiment of the present invention. [0051] FIG. 19 depicts computing unit processing 600 using a system on a chip (SOC) 650 according to one embodiment of the present invention. [0052] FIG. 20 depicts computing unit processing 600 using a single board computer (SBC) 640 according to one embodiment of the present invention. [0053] FIG. 21 depicts computing unit processing 600 using a personal computer (PC) 660 according to one embodiment of the present invention. [0054] FIG. 22 depicts computing unit processing 600 using cloud computing 670 according to one embodiment of the present invention. [0055] FIG. 23 depicts computing unit processing 600 using a central server 680 according to one embodiment of the present invention. DETAILED DESCRIPTION OF EMBODIMENTS OF THE INVENTION [0056] An embodiment of the present invention provides retinal screening to human patients at risk of malarial retinopathy, but the machine-based methods can be modified to screen for retinal diseases comprising diabetic retinopathy, macular degeneration, cataracts, and glaucoma.
In one preferred application of the present invention, a user uses an embodiment of the present invention to record images of the fundus of adequate image quality to allow further methods to assign labels to the images in relation to the likelihood of the presence of malarial retinopathy. In another preferred application of the present invention, further methods assign labels such as "MR present", "MR absent", or "Indeterminate" to sets of acquired fundus images. Those fundus images labeled "MR present" or "Indeterminate" can be forwarded to a reading center for visual inspection by human experts according to telemedicine methods, or to a cerebral malaria expert for further diagnostic investigation or treatment. Other applications of the present invention will become obvious to those skilled in the art. [0057] Embodiments of the present invention enable users to obtain and record retinal images of adequate image quality to allow further methods to execute machine-based and/or human-based clinical assessment. Fundus cameras have the shortcoming of not comprising real-time feedback on image quality, and therefore they are of limited clinical utility. Traditional fundus cameras may comprise visual aids for focus and working distance, but these aids have the shortcoming of not providing information to the user on other image quality features comprising field of view, illumination and alignment artifacts, media opacities, debris in the optical path, and others that will become obvious to those skilled in the art. Thus the present invention fills an unmet need for a system and methods that ensure adequate quality images at the point of care, eliminating the need for re-scheduling and re-imaging of a patient because of inadequate quality images. No existing fundus camera has an automatic MR screening system, which is needed in the malaria-affected countries where immediate care is required but accurate diagnosis by a specialist is not easily available. Embodiments of the present invention meet this need with integrated machine-based methods for automatic MR/no-MR determination. [0058] In the embodiments of the present invention described herein, a retinal camera and imaging protocol are described as non-limiting examples. It will become obvious to those skilled in the art that other retinal cameras and other imaging protocols can be employed without affecting the applicability and utility of the present invention. [0059] Referring now to FIG. 1, a flow chart showing phases of automatic processing of retinal images in conjunction with an imaging device in accordance with an embodiment of the present invention is illustrated. Automatic processing is defined as assessing the quality of the retinal images, providing feedback to a user regarding said image quality, and assessing the likelihood of presence of malarial retinopathy. The entire automatic processing step can be completed within 3 minutes when the software algorithms are optimized for speed and run on computationally efficient machines, whereas a human grader would take up to 20-30 minutes for assessment and detection of malarial retinopathy and could not process the retinal image in the time frame of the present invention. The retinal camera is integrated with MR detection algorithms to process images captured at the point of care in near real time. The retinal imaging process begins with the imaging target 100, either an artificial eye or a human subject clinically diagnosed with CM, in a partially or fully comatose state.
The imaging target can take many forms, including a human eye or an artificial target simulating a human eye. Other imaging targets will be known to those skilled in the art. [0060] The next step of an embodiment of the retinal imaging process is image acquisition using a retinal camera 200. One or more embodiments of the present invention include methods to optimize retinal image quality using selection of alignment, focus, and imaging wavelengths. Image acquisition can take many forms, including capturing an image with a digital camera and capturing an image or a series of images, as in video, with a digital video camera. Retinal cameras typically employ two steps in acquiring retinal images. In the first step an alignment illumination source is used to align the retinal camera to the area of the retina that is to be imaged. In mydriatic retinal cameras this alignment illumination source can comprise a visible source, while in non-mydriatic retinal cameras a near infrared illumination source that does not affect the natural dilation of the pupil is preferably used. The second step is image acquisition, which comprises a visible illumination source driven as a flash or intense light. This flash of light is intense enough to allow the retinal camera to capture an image and short enough to avoid imaging the movements of the eye. [0061] Another embodiment of the present invention is a retinal camera that uses near-infrared light for the alignment illumination source and generates a live, i.e. real-time, video signal of the parts of the retina to be imaged. Other methods for capturing and acquiring retinal images will be known to those skilled in the art. The retinal camera preferably can capture at least one longitudinal point in time. This phase may utilize techniques and devices disclosed in: 1) "Portable retinal imaging for eye disease screening using a consumer-grade digital camera," reported by Barriga et al. in the Proceedings of Photonics West BIOS, San Francisco, CA, 2012; 2) "Low-cost, high-resolution scanning laser ophthalmoscope for the clinical environment," reported by Soliz et al. in the Proceedings of Photonics West BIOS, Proceedings Vol. 7550, San Francisco, CA, 2010; and 3) "Application of adaptive optics in retinal imaging: A quantitative and clinical comparison with standard cameras," reported by Barriga et al. at SPIE's Photonics West 2005, San Jose, California, 2005. [0062] The retinal imaging process further utilizes the Image Quality Analyzer 300 according to one embodiment. In this embodiment, the video signal of the retina is preferably automatically analyzed according to one or more imaging protocols and image quality criteria to assess whether an image is in agreement with an imaging protocol, and the extent of such agreement. The video signal of the retina is further analyzed for detectable image quality defects, and the extent of such defects. On the basis of this analysis, the video signal and one or more visual aids are preferably shown to the photographer to guide the photographer on changes to the camera alignment that may result in improved image quality. Once a retinal image is acquired, such image is preferably automatically analyzed according to imaging protocol and/or image quality criteria to detect whether such image is in agreement with the imaging protocol, includes detectable image quality defects, and the extent of such agreement and defects.
On the basis of this analysis the acquired retinal image is preferably assigned to a class indicative of its clinical utility, the extent and nature of detected image quality defects, and whether such image should be retaken. If after several tries the photographer is unable to obtain a good quality image, an output of "inadequate" is returned and a recommendation for indirect ophthalmoscopy is given. This step may utilize techniques and devices disclosed in U.S. Application No. 14/259,014. [0063] The next step of the retinal imaging process utilizes the Malarial Retinopathy Detection 400 according to one embodiment. In this embodiment one or more retinal images are automatically analyzed according to one or more malarial retinopathy risk assessment criteria to detect whether such image or set of images includes signs of presence of malarial retinopathy. On the basis of this analysis the image or set of images is preferably assigned to a class indicative of the presence of malarial retinopathy. The set of classes may include a class corresponding to an image that cannot be analyzed due to inadequate image quality. [0064] A further step of the retinal imaging process involves Reading Center 500 according to one embodiment of the present invention. In this embodiment a retinal image or a set of retinal images labeled as "Malarial Retinopathy present" as described below is visually analyzed by human experts according to one or more malarial retinopathy classification criteria to detect whether such image or set of images includes detectable MR lesions and/or features, and the extent of such lesions and/or features associated with MR. On the basis of this analysis the image or set of images is assigned to a class indicative of the presence of MR, the extent of disease, and guidance for follow-up care. [0065] The steps comprising Image Quality Analyzer 300 and MR detection phase 400 are preferably implemented by machine-coded mathematical algorithms that run on Computing Unit 600. The Computing Unit 600 preferably comprises one or more types of random-access and read-only memory, one or more types of processors, and one or more types of human interface units, including but not limited to keyboard, touch screen, display, and light indicators. The Computing Unit 600 carries out one or more real-time image transformations, analyses, and classifications according to mathematical algorithms, in representations outside the space and time representations understood by the human brain and at processing speeds superior to what human brains can do. [0066] FIG. 2 is a detailed view of the retinal imaging process. The Image Target phase 100 comprises the object to be imaged. There are many objects that can be imaged. The preferred imaging target is a human eye 110 with a pupil size of at least 3.7mm and clear media, e.g. no cataracts. An alternative imaging target is an artificial eye 120 comprising one or more optical apertures simulating a human pupil, an internal spherical cavity simulating the interior of the human eye, and an internal surface that can simulate the physiological features of the human retina. The internal cavity of an artificial eye can also be made of a material with reflectivity characteristics that can be used to characterize and/or calibrate a retinal camera. Other alternative imaging targets will be known to those skilled in the art.
[0067] Retinal Camera 200 is used to obtain and record digital images of the imaging target 100 according to one embodiment of the present invention. An embodiment of retinal camera 200 is an imaging sensor that can capture the light reflected from imaging target 100. In one embodiment of the present invention said imaging sensor can be a commercial camera such as a digital single lens reflex (DSLR) camera 210. The main advantage of using a commercial DSLR is that these devices comprise imaging sensors and associated hardware and firmware that simplify the imaging and acquisition of retinal images. The optical element of Retinal Camera 200 that enables imaging the fundus of the eye is main lens 220. In one embodiment of the present invention the main lens is a +28 diopter indirect ophthalmoscopy lens, which results in an effective field of view of approximately 30 degrees on the retina. In an alternative embodiment a +40 diopter main lens results in an effective field of view of about 45 degrees on the retina. In a further embodiment a contact lens with a wide field of view results in an effective field of view of greater than 120 degrees on the retina. The system optionally comprises a mechanism to switch between or combine two or more main lenses in order to change the effective field of view of the retinal camera. Other methods to use and combine other types of main lenses will be known to those skilled in the art. [0068] In one embodiment of the present invention multiple illumination sources, including a near infrared source 230 and one or more (preferably at least two) sources of visible light 240, are preferably combined by a dichroic beam splitter, i.e. a hot mirror, in order to simplify the design and construction of the retinal camera. The illumination sources preferably comprise LEDs. The near infrared source 230 is preferably used to align the camera to the area of the fundus to be imaged without affecting the natural dilation of the subject's pupil. Once alignment is achieved, the one or more visible light sources 240 are commanded to emit a flash of light that illuminates the retina momentarily and allows the retinal camera to capture an image of the retina. The wavelength and illumination intensity of the visible sources 240 are preferably selected to optimize the contrast offered by the characteristic reflectance and absorption of the natural chromophores in the retina. In this way illumination entering the pupil is minimized while captured reflected light is maximized, adding to the comfort of the subject and reducing the effect of the light on the subject's natural pupil dilation. The near infrared illumination source 230 may be mounted on a passive heat sink. The visible light illumination sources 240 may be mounted on an aluminum plate and operated in pulsed mode (20ms-1s). The typical high spherical aberration of main lens 220 preferably removes non-uniformity of the LED illumination sources. [0069] In one embodiment of the present invention, the retinal camera 200 comprises a near infrared illumination source 230 with a preferable central wavelength of 850nm (CW). At this wavelength the light penetrates the different layers of the retina up to the choroid, and alignment of the camera is performed using this layer. In another embodiment of the present invention, the retinal camera 200 comprises a near infrared illumination source 230 with a preferable central wavelength of 760nm (CW), which enables the camera to be aligned against inner layers of the retina such as the retinal vasculature.
In yet another embodiment of the present invention, the retinal camera 200 comprises a near infrared illumination source 230 comprising selectable central wavelengths, e.g. between 760nm and 850nm, in order to align the camera against retinal features on different layers of the retina. Other methods to illuminate the retina for alignment will be known to those skilled in the art.
[0070] In one embodiment of the present invention, the retinal camera 200 comprises a white light source such as an LED used as the visible light illumination source 240. The color temperature of the white light LED determines the color balance in the resulting retinal image. In another embodiment of the present invention, the retinal camera 200 comprises a method to select among two or more color temperatures of the white LED illumination source 240 in order to obtain different degrees of color balance among retinal features. For example, retinal vasculature is more prominent in the green channel, whereas differences in retinal pigmentation due to melanin may call for different color temperatures in order to compensate for these differences. In yet another embodiment of the present invention, the retinal camera 200 comprises a multi-color illumination source 240, such as a tri-color LED illumination source, and a method to change the intensity of each of the multiple colors in order to optimize color balance in acquired retinal images. Other methods to use and combine other types of visible light illumination sources will be known to those skilled in the art. [0071] In one embodiment of the present invention, the retinal camera 200 comprises a beamsplitter 250 to allow light to be directed to the retina and then captured by the image detector 210. In one embodiment of the present invention, the retinal camera 200 comprises a nano-wire polarizer beamsplitter 250. A nano-wire polarizer beamsplitter combines two roles in one single device: a beamsplitter to allow light to be directed to the retina and then captured back at the image detector, and a polarizer to minimize back reflections from other optical components and the eye's cornea, as well as permitting the use of a small central obscuration. The use of a nano-wire polarizer beamsplitter makes the retinal camera 200 easier and cheaper to manufacture and calibrate compared to other devices that use separate components for beam splitting and polarization. [0072] FIG. 3(A) illustrates an example of a retinal camera 200 assembly that can be used to provide spatial stabilization between the imaging target 100 and the retinal camera 200. This example comprises chin rest 280 and camera mount 290. In one embodiment of the present invention, the retinal camera 200 comprises an external fixation light 282 attached to the chin rest to allow precise control of the subject's gaze in order to image different areas of the fundus. FIG. 3(B) illustrates an example of retinal camera 200 that comprises a wall-mounted camera base 292 to enable relative stabilization between the imaging target 100 and the retinal camera 200. FIGS. 4(A) and 4(B) are example images of adequate quality acquired with two different retinal cameras from two diabetic individuals presenting different stages of diabetic retinopathy. [0073] In one embodiment of the present invention, the retinal camera 200 comprises a lens with a 40-degree field of view (FOV). Using said embodiment, a photographer would take multiple shots to cover the central and peripheral retinal areas of interest for MR detection. Another embodiment of the present invention comprises a retinal camera 200 that uses an optical contact lens 260 with a wide FOV to achieve a greater than 120-degree instantaneous FOV in pediatric eyes, in order to capture in a single image as much of the retina and periphery as possible and simplify the healthcare staff's task of capturing suitable images. FIG. 5(A) shows the optical design of the contact lens in conjunction with other lenses, and FIG. 5(B) shows the layout with the position of the contact lens inside the retinal camera 200. [0074] In one embodiment of the present invention, the retinal camera 200 comprises an optical contact lens 260 with a wide FOV, such as the Mainster-PRP 165 optical lens (Ocular Instruments), producing an FOV of 165 degrees. The resulting optical ray tracing of the design is shown in FIGS. 5(A) and 5(B). The wide FOV optics preferably consist of a first element in contact with the cornea, made of polymethyl methacrylate, bonded to a high refractive index glass contact lens. The high-power asphere, serving as the foreoptics, produces a wide-field aerial image of the retina that is captured by the camera imaging system. Increasing the FOV requires using a contact lens, but does not increase the size of the camera. Introducing a contact lens into the optical path brings the camera directly in contact with the patient's cornea and helps in keeping the patient's eyelids open and stable. The photographer can maintain a stable imaging posture and change the viewing angles easily, reducing the artifacts due to patient and camera movements. The increase in FOV preferably requires using a larger sensor 210, such as one with 16 megapixels, to maintain a small pixel footprint on the retina. In the present embodiment, the optical design preferably presents a pixel footprint resolution of ~5 microns, which may be sufficient to detect the smallest retinal feature needed for MR lesion detection: the width of the smallest retinal vessel wall, a feature that measures ~15 microns.
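As an illustrative sanity check on these numbers (the sensor width and imaged retinal extent below are assumptions, not specifications of the present design):

```latex
% Assumed geometry (illustrative): a 16-megapixel sensor roughly 4900
% pixels wide imaging a retinal extent of about 24.5 mm across.
\[
  \text{pixel footprint} \approx \frac{24.5\,\text{mm}}{4900\,\text{pixels}}
  \approx 5\,\mu\text{m per pixel},
\]
% while resolving a ~15-micron vessel wall at two pixels per feature
% (a Nyquist-style criterion) requires a footprint of at most
\[
  \frac{15\,\mu\text{m}}{2} = 7.5\,\mu\text{m per pixel}.
\]
```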
[0075] In one embodiment of the present invention, the retinal camera 200 comprises a contact lens 260 with a wide FOV, such as the Mainster-PRP 165, integrated into the retinal camera. FIGS. 6(A) and 6(B) show example pilot images of an artificial eye captured without (FOV ~32°) and with the wide FOV lens 260 (FOV 165°), respectively. For comparison, a yellow dashed circle in the image in FIG. 6(B) shows the 32° FOV captured by the camera without the wide FOV lens 260. For reference, FIG. 6(A) shows retinal structures such as retinal vessels (601), optic disc (602), and a retinal lesion (603). [0076] The retinal camera 200 preferably further comprises an electronic transmission system to transmit images or a video feed 270 of the areas of the retina illuminated by the near infrared illumination source 230 and the visible illumination sources 240 to the next phase of the retinal imaging process. Example systems can include wired transmission according to the HDMI standard, wireless transmission according to the Wi-Fi standard, and solid state memory media such as Secure Digital cards. Other transmission methods will be known to those skilled in the art. [0077] Referring to FIG. 2, Image quality analyzer 300 and Malarial Retinopathy Detection system 400 preferably perform real-time and automatic assessment of retinal images transmitted by the transmission system 270. The Image quality analysis 300 and MR detection process 400 preferably use one or more methods implemented as machine-coded algorithms running on a computing unit 600 comprising one or more types of host memory 610, one or more types of host processors 620, and one or more types of graphics processing units 630, as shown in FIG. 17. Examples of the methods used in the Image quality analysis 300 and MR detection process 400 include those found in scientific calculation software tools such as Matlab and its associated toolboxes. Other scientific calculation software tools will be known to those skilled in the art. [0078] In one embodiment of the present invention, the Image quality analysis 300 and MR detection process 400 preferably use one or more methods implemented as machine-coded algorithms running on a computing unit 600 comprising an embedded computer, such as a system on a chip (SOC) 650 as shown in FIG. 19, that can be integrated with the retinal camera 200. In one embodiment, the embedded computer comprises hardware such as the ODROID-U3 processing board, a Linux embedded computer with a 1.7 GHz Cortex-A9 quad-core processor, 2 GB RAM, common I/O ports (HDMI, USB, Ethernet, Audio), and a 2.2-inch, 240 x 320 TFT-LCD display. This portable processing board can preferably show the images being processed and the results of the processing. The machine-coded algorithms for MR detection system 400 are preferably ported to C/C++ and then implemented on the processing board. The processing board is of sufficiently small size (3"-by-2.5") to be integrated with the retinal camera without the need for extensive modifications or an increase in system size. The processor is equipped with Ethernet, GPIO, and USB ports that can be utilized for communication with the camera for automatic download of images, as well as with a central computer for storing results. [0079] Referring to FIG. 2, Image Quality Analyzer 300 preferably performs real-time and automatic assessment of retinal images transmitted by the transmission system 270. As shown in FIG. 18, the Image Preparation process 310 preferably comprises one or more image processing steps, such as decoding of video into images, image resizing, image enhancement, and color channel transformation. The Image Preparation process 310 preferably uses one or more methods implemented as machine-coded algorithms running on a computing unit 600 comprising one or more types of host memory 610, one or more types of host processors 620, and one or more types of graphics processing units 630. Examples of the methods used in the Image Preparation process 310 include those found in scientific calculation software tools such as Matlab and its associated toolboxes. [0080] Referring to FIG. 18, the Analyze Image Quality phase 340 comprises one or more tools or applications to assess one or more image quality characteristics in retinal images. Said tools or applications are preferably implemented as machine-coded algorithms running on a computing unit 600. In an embodiment of the retinal screening process, the Analyze Image Quality phase 340 determines retinal image alignment, the presence and extent of crescents and shadows, and a measure of quantitative image quality according to expert visual perception. The results of the Alignment, Crescent and Shadow, and Quantitative Image Quality phases are preferably passed to the Classification phase 350 for further processing. The Image Quality Analyzer phase 300 may utilize techniques and devices disclosed in U.S. Application No. 14/259,014. [0081] The Alignment phase preferably automatically determines whether a retinal image complies with an imaging protocol. If a retinal image does not comply with said imaging protocol, the Alignment phase determines that said image is misaligned. An embodiment of the imaging protocol uses one retinal image with the optic disc in the center of the image (herein referred to as F1) and one image with the fovea in the center of the image (herein referred to as F2). The Alignment phase preferably uses one or more image processing steps implemented as machine-coded algorithms running on a computing unit 600, and preferably comprises the steps of field detection, macula detection, right/left eye detection, and alignment check.
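Referring again to the Image Preparation process 310, a minimal sketch of such a step follows, assuming OpenCV (cv2); the function name and parameter values are illustrative, not the actual implementation:

```python
import cv2

def prepare_frame(frame_bgr, target_width=1024):
    """Resize a decoded video frame, split its color channels, and
    enhance contrast in the green channel (where retinal vessels are
    most prominent) using CLAHE."""
    scale = target_width / frame_bgr.shape[1]
    resized = cv2.resize(frame_bgr, None, fx=scale, fy=scale,
                         interpolation=cv2.INTER_AREA)
    blue, green, red = cv2.split(resized)
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    return resized, clahe.apply(green)
```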
[0082] The Field Detection phase preferably uses one or more machine-coded algorithms running on a computing unit 600 to determine whether a retinal image is F1 or F2. The Field Detection phase determines the location of the optic disc and relates this location to areas on the retinal image where the optic disc should be located for F1 images and F2 images. A region of interest with respect to the field of view (FOV) center is defined. The region consists of a vertical band with width equal to two optic disc (OD) diameters. If the automatically detected optic disc location falls inside this band, then the image is classified as F1. If the OD falls outside this band, it may not be an F1 image, since the OD is then farther from the FOV center relative to the macula. If the OD is outside of the band on either side, there is a higher probability of the macula being inside the band on the other side (given that the macula is approximately two optic disc diameters away from the OD, and the width of the band is two OD diameters). Thus the macula is closer to the FOV center relative to the OD, and the image is therefore an F2 image.
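By way of non-limiting illustration, the band rule just described might be sketched as follows (function and variable names are hypothetical; horizontal pixel coordinates are assumed):

```python
def classify_field(od_center_x, fov_center_x, od_diameter_px):
    """Label an image F1 (disc-centered) or F2 (macula-centered) using
    the vertical band two optic disc (OD) diameters wide, centered on
    the field of view (FOV) center, described above."""
    half_band_width = od_diameter_px  # band width = 2 OD diameters
    if abs(od_center_x - fov_center_x) <= half_band_width:
        return "F1"  # OD inside the central band: disc-centered image
    return "F2"      # OD outside the band: macula nearer the FOV center
```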
[0083] The Macula Detection step uses one or more machine-coded algorithms running on a computing unit 600 to determine the location of the macula within a retinal image. An embodiment of the Macula Detection step preferably comprises a histogram equalization step to enhance image contrast, a pixel binarization step to eliminate gray scales, a pixel density map step to determine the region with the highest pixel density within the image, a location constraints step based on angular position with respect to the optic disc location, and a probability map step to determine the candidate pixels most likely to represent the macula. The most likely pixel location is then preferably assigned as the machine-calculated macula location. A probability map results from the steps carried out by the Macula Detection step on a retinal image. [0084] The Left/Right Eye Detection step preferably uses one or more machine-coded algorithms running on a computing unit 600 to determine the side of the face a retinal image belongs to. The Left/Right Eye Detection step preferably analyzes the vessels within the optic disc and compares the vessel density in the left half of the optic disc to the vessel density in the right half of the optic disc. The half with the lower vessel density is labeled as the temporal edge of the optic disc. If the temporal edge of the optic disc is on the right side of the retinal image, then the retinal image is labeled "left". If the temporal edge of the optic disc is on the left side of the retinal image, then the retinal image is labeled "right".
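The left/right rule just described might be sketched as follows (a minimal sketch with hypothetical names, assuming a binary vessel segmentation mask and an optic disc bounding box are available):

```python
def detect_eye_side(vessel_mask, od_box):
    """Label an image "left" or "right" by comparing vessel density in
    the two halves of the optic disc; the half with the lower density
    is taken as the temporal edge, per the rule above.
    vessel_mask: binary vessel segmentation; od_box: (x0, y0, x1, y1)."""
    x0, y0, x1, y1 = od_box
    disc = vessel_mask[y0:y1, x0:x1]
    mid = disc.shape[1] // 2
    left_half_density = float(disc[:, :mid].mean())
    right_half_density = float(disc[:, mid:].mean())
    # Temporal edge (lower density) on the right side => left eye image.
    return "left" if right_half_density < left_half_density else "right"
```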
[0085] The Alignment Check step preferably uses one or more machine-coded algorithms running on a computing unit 600 to determine whether a retinal image is correctly aligned. An embodiment of Alignment Check comprises steps to check that the macula position is at an angle between 0 and 20 degrees below the position of the optic disc with respect to the horizontal axis of the retinal image; to check that the distance between the optic disc center and the macula is more than two optic disc diameters and less than three optic disc diameters; to check that the optic disc and macula are positioned within the one-third and two-thirds sections of the vertical image size; and to check that the macula is positioned within the vertical region limited by the temporal vessel arcades. When a retinal image passes the aforesaid checks, it is assigned two labels, one for eye side and one for field of view. The eye side label is preferably "OD" if it is a right eye image or "OS" if it is a left eye image. The field of view label is preferably "F1" if it is a disc-centered image or "F2" if it is a macula-centered image. An image that fails one or more checks is preferably assigned the label "Misaligned" or "unacceptable". [0086] The Crescent/Shadow Detection step preferably uses one or more machine-coded algorithms running on a computing unit 600 to determine whether a retinal image includes crescents and/or shadows. Crescents occur from iris reflection due to small pupils or misalignments by the photographer. The reflected light shows up as a bright region towards the edge of the image. The brightness of the crescent is generally strongest at the boundary of the image and fades toward the center of the image. Shadows are caused by not enough light reaching the retina, creating dark regions in the images. Both crescents and shadows obscure parts of the retina that may contain important clinical information. For crescent detection, embodiments of the Crescent/Shadow Detection step comprise creating a mask that contains the retinal image, applying a gradient filtering algorithm to detect gradual changes in pixel value towards the center of the image, enhancing the gradient image, and determining whether the pixels in the third quartile of the gradient image occupy more than a predetermined percentage (e.g. 15%) of the image area. When the number of pixels in the third quartile of the gradient image occupies more than the predetermined percentage of the image area, the image is assigned the label "inadequate". It will be known to those skilled in the art that different combinations of masks, gradient images, percentiles, and areas covered by crescents may be used in the present invention. For shadow detection, embodiments of the Crescent/Shadow Detection step comprise applying a pixel value threshold to the pixels of the retinal image, inverting the values of the thresholded image, counting the number of pixels within 10 percent of saturation, and comparing this number of pixels to 25 percent of the total number of pixels in the retinal image. When the number of pixels within 10 percent of saturation in the thresholded image is larger than 25 percent of the total number of pixels in the image, the image is preferably assigned the label "inadequate." It will be known to those skilled in the art that different combinations of threshold values, percentiles, and areas covered by shadows may be used in the present invention.
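The shadow detection rule just described might be sketched as follows (a minimal sketch; the dark threshold value is an assumption, while the 10% and 25% figures are those given above):

```python
import numpy as np

def is_shadowed(gray, dark_threshold=60):
    """Shadow check per the rule above: threshold the image, invert it,
    and flag it "inadequate" when the pixels within 10% of saturation
    in the inverted image (originally dark regions) exceed 25% of all
    pixels. gray is an 8-bit grayscale retinal image."""
    thresholded = np.where(gray < dark_threshold, gray, 255).astype(np.uint8)
    inverted = 255 - thresholded            # dark (shadow) pixels become bright
    near_saturated = inverted >= 0.9 * 255  # within 10 percent of saturation
    return near_saturated.mean() > 0.25     # fraction of all pixels
```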
[0087] Embodiments of the present invention preferably provide an automated approach based on the classification of global and local features that correlate with the human perception of retinal image quality as assessed by eye care specialists. The Quantitative Image Quality step preferably uses one or more machine-coded algorithms running on a computing unit 600 to determine whether a retinal image includes image quality defects associated with the visual perception of human experts. Visual perception information comprises a dataset of retinal images which have been assigned labels by expert readers of retinal images. Said labels are preferably "adequate" for retinal images of adequate quality and "inadequate" for retinal images that are not of adequate quality. Adequacy comprises the expert's subjective assessment of whether an image is of adequate quality to make a clinical determination, including but not limited to the presence of lesions that may indicate the presence of disease. The dataset of images and associated labels is used as a training set for quantitative classification. [0088] Feature extraction comprises calculation of mathematical image features of one or more categories. An embodiment of feature extraction comprises four categories: vessel density features, histogram features, texture features, and local sharpness features. The overall image content, such as lightness homogeneity, brightness, and contrast, is preferably measured by global histogram and textural features. The sharpness of local structure, such as the optic disc and vasculature network, is preferably measured by a local perceptual sharpness metric and vessel density. It will be known to those skilled in the art that different categories of mathematical image features and combinations thereof are possible and are all part of the present invention.
[0089] Vessel density features are used in order to check the sharpness of dark vessel structures, since the performance of vessel segmentation is sensitive to the blurriness of the vasculature. To segment the retinal vasculature, a method based on a Hessian eigensystem and second-order local entropy thresholding can be used (H. Yu, S. Barriga, C. Agurto, G. Zamora, W. Bauman and P. Soliz, "Fast Vessel Segmentation in Retinal Images Using Multiscale Enhancement and Second-order Local Entropy," SPIE Medical Imaging, Feb. 2012, San Diego, USA). Vessel segmentation is performed after illumination correction and adaptive histogram equalization to remove uneven lightness and enhance contrast in the green channel of the retinal images. Vessel density is preferably calculated as the ratio of the area of segmented vessels over the area of the field of view (FOV) in an image.
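A minimal sketch of the vessel density feature follows. Because the cited segmentation method (multiscale Hessian enhancement with second-order local entropy thresholding) is not reproduced here, the sketch substitutes the related Frangi vesselness filter with Otsu thresholding as a stand-in; function names and parameters are illustrative.

```python
import numpy as np
from skimage import exposure, filters

def vessel_density(rgb, fov_mask):
    """Vessel density = segmented vessel area / FOV area.

    `rgb` is an 8-bit color fundus image; `fov_mask` is a boolean
    mask of the camera's field of view."""
    green = rgb[..., 1].astype(float) / 255.0
    # Contrast normalization comparable to adaptive histogram equalization.
    green = exposure.equalize_adapthist(green)
    # Frangi enhances dark, elongated (vessel-like) ridges by default.
    vesselness = filters.frangi(green)
    vessels = vesselness > filters.threshold_otsu(vesselness[fov_mask])
    return np.count_nonzero(vessels & fov_mask) / np.count_nonzero(fov_mask)
```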
[0090] Preferably seven histogram features are extracted from two or more color spaces, e.g. RGB: mean, variance, skewness, kurtosis, and the first three cumulative density function (CDF) quartiles. These seven histogram features describe the overall image information such as brightness, contrast, and lightness homogeneity. First-order entropy and spatial frequency may also be computed to capture image complexity.
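The seven per-channel histogram features can be computed directly, for instance as in the following sketch (assuming a color channel scaled to [0, 1]; the function name is illustrative):

```python
import numpy as np
from scipy import stats

def histogram_features(channel):
    """The seven histogram features named above: mean, variance,
    skewness, kurtosis, and the first three CDF quartiles."""
    values = channel.ravel()
    q25, q50, q75 = np.percentile(values, [25, 50, 75])  # CDF quartiles
    return np.array([values.mean(), values.var(),
                     stats.skew(values), stats.kurtosis(values),
                     q25, q50, q75])

# Example: concatenate the features over the R, G, B channels.
# feats = np.concatenate([histogram_features(img[..., c] / 255.0)
#                         for c in range(3)])
```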
[0091] Texture features are used in identifying objects in an image. The texture information can be obtained by computing the co-occurrence matrix of an image. Preferably five Haralick texture features are calculated: the second-order entropy, contrast, correlation, energy, and homogeneity (Haralick, R.M., K. Shanmugam, and I. Dinstein, "Textural Features for Image Classification," IEEE Transactions on Systems, Man, and Cybernetics, Vol. SMC-3, 1973, pp. 610-621). Entropy measures the randomness of the elements of the matrix; entropy is highest when all elements of the matrix are maximally random. Contrast measures the intensity difference between a pixel and its neighbor. The correlation feature measures the correlation between the elements of the matrix; when correlation is high, the image is more complex than when correlation is low. The fourth feature, energy, describes the uniformity of the texture. In a homogeneous image there are very few dominant grey-tone transitions, hence the co-occurrence matrix of such an image has fewer entries of large magnitude, so the energy of an image is high when the image is homogeneous. The last feature, homogeneity, also called the inverse difference moment, has a relatively high value when the high values of the co-occurrence matrix are near the main diagonal. It is a measure of coarseness in an image.
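For concreteness, the five Haralick features can be derived from a gray-level co-occurrence matrix as sketched below, here with a single offset and angle on an 8-bit grayscale image; the choice of distances and angles is an assumption.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def haralick_features(gray_u8):
    """Five Haralick features from a gray-level co-occurrence matrix
    (GLCM) computed for a one-pixel horizontal offset."""
    glcm = graycomatrix(gray_u8, distances=[1], angles=[0],
                        levels=256, symmetric=True, normed=True)
    p = glcm[:, :, 0, 0]
    # Second-order entropy of the normalized GLCM.
    entropy = -np.sum(p[p > 0] * np.log2(p[p > 0]))
    return {
        "entropy": entropy,
        "contrast": graycoprops(glcm, "contrast")[0, 0],
        "correlation": graycoprops(glcm, "correlation")[0, 0],
        "energy": graycoprops(glcm, "energy")[0, 0],
        "homogeneity": graycoprops(glcm, "homogeneity")[0, 0],
    }
```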
[0092] Clear and sharp edges are important for good image quality. A local sharpness metric, the cumulative probability of blur detection (CPBD), is applied on the green channel to measure the strength of sharp edges in the images. Average edge width and gradient magnitude have been used to measure the blurriness of images, but these have been found to be too simplistic to correspond directly to human visual perception of blurriness, which is a complicated process. An embodiment of the local sharpness feature step uses the cumulative probability of blur detection (CPBD) at every edge (N. D. Narvekar and L. J. Karam, "A No-Reference Image Blur Metric Based on the Cumulative Probability of Blur Detection (CPBD)," IEEE Transactions on Image Processing, Vol. 20 (9), pp. 2678-2683, Sep. 2011). CPBD assumes that the blurriness around an edge is more or less noticeable depending on the local contrast around that edge. It derives a human perceptibility threshold called "Just Noticeable Blur" (JNB), which can be defined as the minimum amount of perceived blurriness around an edge given a contrast higher than the "Just Noticeable Difference" (JND). The perceived blurriness around an edge is counted only when its amount is larger than the "Just Noticeable Blur" under the "Just Noticeable Difference" in contrast. It can be modeled as follows:
P_B(e_i) = 1 - exp( -| w(e_i) / w_JNB(e_i) |^β )

where P_B(e_i) is the probability of blur detection at each edge e_i, w(e_i) is the measured width of the edge, and w_JNB(e_i) is the JNB edge width for the local contrast. If the actual width of the edge is the same as the JNB edge width, then

P_B(e_i) = 1 - e^(-1) ≈ 0.63,

the JND probability, below which blur is said to be undetectable. β has a median value of 3.6 based on an experimentally determined psychometric function (R. Ferzli and L. J. Karam, "A No-Reference Objective Image Sharpness Metric Based on the Notion of Just Noticeable Blur (JNB)," IEEE Transactions on Image Processing, Vol. 18 (4), pp. 717-728, 2009). The cumulative probability is based on the sum of the blur probabilities that are below 63%. The CPBD value is therefore negatively correlated with edge blurriness. To measure local edge blurriness, the image is divided into blocks of size 64x64, and the ratio of edge blocks, in which the number of edge pixels exceeds a certain threshold, to the number of all blocks is used as another feature, since some retinal images that are hazy due to cataract may appear foggy and misty while having sharp edges underneath.
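The CPBD computation itself is straightforward once edge widths and local contrasts are available; the sketch below implements the equations above, omitting the edge detection and edge-width measurement, and uses the Ferzli-Karam JNB widths (about 5 pixels for contrast <= 50 and about 3 pixels otherwise, on an 8-bit scale).

```python
import numpy as np

def cpbd_from_edges(edge_widths, contrasts, beta=3.6):
    """CPBD from per-edge widths (pixels) and local contrasts (0-255).
    Higher CPBD means a sharper image."""
    edge_widths = np.asarray(edge_widths, dtype=float)
    w_jnb = np.where(np.asarray(contrasts) <= 50, 5.0, 3.0)
    # Probability of blur detection at each edge.
    p_blur = 1.0 - np.exp(-np.abs(edge_widths / w_jnb) ** beta)
    p_jnb = 1.0 - np.exp(-1.0)  # ~0.63, the JND probability
    # Cumulative probability mass of edges whose blur probability
    # stays below the just-noticeable level.
    hist, _ = np.histogram(p_blur, bins=100, range=(0, 1))
    pdf = hist / max(p_blur.size, 1)
    centers = (np.arange(100) + 0.5) / 100
    return pdf[centers <= p_jnb].sum()
```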
[0093] Quantitative Classification preferably comprises a dataset of reference labels of image quality and associated mathematical features to produce a mathematical predictor of image quality. Further, Quantitative Classification preferably comprises a mathematical predictor of image quality to predict the image quality label of an image using said image's mathematical features. In an embodiment of the present invention a predictor of image quality is constructed by determining the relationship between the reference set of image quality labels and their associated mathematical features. To predict the image quality label of a retinal image, the predictor of image quality uses said relationship to assign said image a label of either “adequate” or “inadequate” on the basis of the mathematical features of said image. An embodiment of the mathematical predictor of image quality comprises a partial least squares (PLS) classifier. PLS is a very powerful method of eliciting the relationship between one or more dependent variables and a set of independent (predictor) variables, especially when there is high correlation between the input variables. In the present invention, the same feature when applied to different color channels tends to be highly correlated even if the magnitudes are quite different. PLS finds a lower-dimensional orthogonal sub-space of the multi-dimensional feature space and provides robust prediction. It will be known to those skilled in the art that other prediction models exist and can be used alone or in combination to obtain labels of image quality.
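A minimal sketch of such a PLS-based quality predictor, using the PLSRegression estimator from scikit-learn, follows; the number of components and the 0.5 decision cut-off are illustrative assumptions, not values from the source.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

def train_quality_predictor(features, labels, n_components=10):
    """Fit a PLS regressor on an (n_images, n_features) matrix and
    reference gradings (1 = "adequate", 0 = "inadequate")."""
    pls = PLSRegression(n_components=n_components)
    pls.fit(features, labels.astype(float))
    return pls

def predict_quality(pls, features, threshold=0.5):
    """Map the continuous PLS response to a quality label."""
    scores = pls.predict(np.atleast_2d(features)).ravel()
    return np.where(scores >= threshold, "adequate", "inadequate")
```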
[0094] IQ Classification phase 350 is preferably implemented as machine-coded algorithms running in a computing unit 600, and preferably assigns to an RGB retinal image a label that is indicative of its image quality and suggestive of an action that the photographer can take, such as “Valid” or “Invalid”, “acceptable” or “unacceptable”, “adequate” or “inadequate”, or any other label obvious to anyone skilled in the art, on the basis of the results from the Analyze Image Quality phase 340. A label “Valid” is preferably assigned when the Crescent/Shadow step outputs an “adequate” label, the alignment check step outputs an “acceptable” label, and the Quantitative Classification outputs an “adequate” label. A label “Invalid” is preferably assigned when any of the labels assigned by the Crescent/Shadow step, the alignment check step, or the Quantitative Classification step are “inadequate”. When a retinal image is assigned a label “Invalid” the user should retake the image using the retinal camera 200.

[0095] Referring to FIG. 2, the Malarial Retinopathy (MR) Detection step 400 preferably processes one or more retinal images to determine whether said images include signs of MR. The Malarial Retinopathy Detection step 400 preferably comprises one or more machine-coded mathematical algorithms running in a computing unit 600 to assign a label to a retinal image indicative of the presence or absence of MR. In an embodiment of the present invention, said labels are descriptive of the action to be taken by a health care provider on the basis of the status of MR.

[0096] The Malarial Retinopathy Detection phase 400, detailed in FIG. 2, comprises one or more tools or applications to assess the presence of one or more MR retinal lesions or retinal abnormalities in retinal images. Said tools or applications are preferably implemented as machine-coded algorithms running in a computing unit 600. In an embodiment of the Malarial retinopathy detection step 400, a set of Reference Images 405 determines the valid retinal images for further processing, the Retinal whitening detection step 410 assesses the likelihood of presence of retinal whitening and the whitening location in the retinal image, the Hemorrhage detection phase 420 assesses the likelihood of presence of hemorrhages and hemorrhage locations in the retinal image, and the Vessel discoloration detection step 430 detects the likelihood of presence of vessel discoloration and vessel discoloration locations in the retinal image. The results of the Retinal whitening detection 410, Hemorrhage detection 420, and Vessel discoloration detection 430 steps are preferably passed to the Lesion based classification phase 440 to assess the likelihood of presence of MR, which is preferably passed to the MR detection phase 450 to assess the presence or absence of MR based on the quantitative likelihood value for presence of MR determined by a classifier.

[0097] The set of Reference Images 405 preferably comprises images containing a plurality of plausible grades of detectable MR, including cases of no detectable MR. As shown by the arrows in FIGS. 7(A), 7(B), and 7(C), examples of lesions that these images comprise include retinal whitening (701), hemorrhages (702), and vessel discoloration (703), respectively. These images are used as examples of different types of detectable lesions indicative of MR and as the basis for MR detection module 450. The set of Reference Images 405 preferably further comprises descriptive labels according to the grades of detectable MR, and these may be referred to as ground truth.

[0098] The retinal whitening detection step 410, detailed in FIG. 8, comprises one or more machine-coded algorithms running in a computing unit 600 to assess the presence of whitening and the location of whitening within a retinal image or set of images. Retinal whitening is thought to be a manifestation of hypoxia, and it has been postulated that it is the result of oncotic cell swelling of neurons in the inner retinal layer in response to hypoxic stress. The severity and pattern of whitening in MR is unique and highly specific to MR. As shown in FIG. 8, the whitening detection step 410 preferably comprises a pre-processing step 411 to minimize camera reflex, a marker-controlled watershed segmentation step 415 to form image splats, a feature extraction step 416 to determine and extract image features indicative of whitening, a feature classification step 417 using a partial least squares classifier, a false positive reduction step 418, and an image classification step 419 to assess the most likely class for the image, i.e., whether the image contains retinal whitening. The most likely class assessed in step 419 is then preferably assigned to the image or set of images. FIG. 10(C) illustrates an example of the probability map (1003) resulting from the steps carried out by the Whitening Detection phase 410 on a retinal image.
[0099] The pre-processing step 411 preferably automatically minimizes the effect of camera reflex in the retinal image, thereby decreasing the number of false positives resulting from imaging artifacts such as reflex from the internal limiting membrane, as the color features of whitening largely overlap with those of the reflex. As described in FIG. 9, true whitening appears creamy or fuzzy white in color (dense striped annotations in FIG. 9), whereas a reflex presents with a shiny and bright white color (sparse striped annotations in FIG. 9). To minimize the effect of reflex, step 412 comprises one or more image processing algorithms to extract color features from color spaces such as green (RGB color space), k (CMYK color space), X and Z (CIE-XYZ color space), and ‘L’ (HSL color space). The color spaces preferably provide a significant distinction between a true whitening region and reflex beyond what a human observer can distinguish. The color channels are combined using a pixel-based image multiplication operation to improve contrast between true whitening and reflex. Furthermore, a white top-hat transform is applied to the contrast-improved image, which highlights the reflex and separates it from true whitening (transform image).

[00100] The pre-processing step 411 also comprises a textural-feature-based processing step 413 which preferably uses textural features to distinguish between true whitening and reflex, such as but not limited to the smooth and fuzzy texture of whitening versus the sharp texture of reflex with strong edges, which are preferably used by a retinal reader in differentiating whitening from reflex. This step extracts features using textural filter functions such as but not limited to entropy, range, and standard deviation, applied to the top-hat transformed image obtained at the output of step 412. Furthermore, the texture-filtered image obtained at the output of step 413 is post-processed in step 414 for contrast enhancement using a contrast equalization technique or such others, which highlights the reflex region. Furthermore, in step 414, the contrast-equalized image is subtracted from the original image using a pixel-based image subtraction operation, and subjected to illumination normalization, Gaussian smoothing, and noise removal. The preferable outcome of step 414 minimizes the reflex and preserves true whitening.
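The channel-multiplication and white top-hat operations of steps 412-414 can be sketched as follows; the particular channels combined here (green, X, and Z) and the structuring-element radius are illustrative assumptions, and the subsequent texture filtering and subtraction steps are omitted.

```python
import numpy as np
from skimage import color, morphology

def reflex_highlight(rgb, radius=25):
    """Sketch of reflex separation: multiply reflex-sensitive channels,
    then apply a white top-hat to isolate small, shiny reflex from
    broad, fuzzy whitening."""
    rgb = rgb.astype(float) / 255.0
    xyz = color.rgb2xyz(rgb)
    # Pixel-wise product boosts regions bright in all chosen channels.
    combined = rgb[..., 1] * xyz[..., 0] * xyz[..., 2]
    combined /= combined.max() + 1e-12
    # White top-hat keeps bright structures smaller than the element.
    return morphology.white_tophat(combined, morphology.disk(radius))
```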
[00101] The output image obtained in step 414 is used for further processing comprising one or more image processing algorithms for assessment of whitening. The watershed segmentation step 415 preferably uses techniques such as but not limited to marker-controlled watershed segmentation, applied to the green channel of the output image of step 414. Step 415 divides the image into a number of watershed regions called splats based on region homogeneity (image regions with similar intensity levels). Step 415 preferably uses gradient magnitudes of the green channel at multiple scales and utilizes their maximum for segmentation. Furthermore, the watershed segmentation of the gradient magnitude image is performed, which generates the splats. The feature extraction step 416 preferably extracts multiple color-, morphology-, and texture-based features from each of the splats of the processed image for color channels such as ‘k’ (CMYK color space), green (RGB color space), and ‘X and Z’ (CIE-XYZ color space), to form a feature vector. A feature vector is a set of features that describe color, textural, morphological, and other properties of image structures, represented in a quantitative format comprising continuous or discrete as well as signed (+/-) or unsigned values. The features included in a feature vector follow the standard definitions of image properties that are universally accepted. The feature vector used for the detection of whitening preferably comprises: 1) Haralick textural features such as contrast, correlation, energy, homogeneity, entropy, dissimilarity, and other textural features obtained from a gray-level co-occurrence matrix (22 features); 2) statistical features: median, standard deviation, kurtosis, skewness, moments, and percentiles at various values (10 features); 3) morphological features: eccentricity, solidity, bounding box. The feature vector preferably comprises a multitude of features, but other significant color and textural features can be included. The classification step 417 preferably integrates the extracted feature vector for classifier training and testing. The features are normalized to have ‘zero’ mean and ‘unit’ standard deviation. Furthermore, a partial least squares (PLS) classifier is trained to learn the properties of whitening based on the extracted features, but other classifiers can be used. Based on the classification output, each splat in the test images is preferably assigned a probability of being inside the whitening region (FIG. 10(C): 1003). FIG. 10(A) shows the corresponding retinal image with whitening depicted as a dense striped pattern (1001) and FIG. 10(B) shows the ground truth annotated for whitening (1002). The probability map can preferably be thresholded at various values to assess whitening with different sensitivity/specificity characteristics. The false positive reduction step 418 preferably uses morphological features such as but not limited to size, eccentricity, solidity, and bounding box of the detected whitening candidates, to minimize false positive detections of whitening. The image classification step 419 preferably assesses the presence or absence of whitening in a given image using features extracted from the image such as but not limited to: 1) the maximum probability assigned to a splat in the image; 2) the sum of non-zero probabilities assigned to splats; 3) the product of non-zero probabilities assigned to splats. Furthermore, the PLS classifier preferably uses one or more image-based features extracted in step 419 to assess, for each image, the likelihood that it contains whitening. Furthermore, a label is assigned to the given image or set of images based on the likelihood value, as “Whitening present” or “Whitening absent”.
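The splat formation of step 415 can be sketched with standard tools: compute the gradient magnitude of the green channel at several scales, take the pixel-wise maximum, and apply watershed segmentation. The scale set below is an assumption.

```python
import numpy as np
from scipy import ndimage as ndi
from skimage import segmentation

def splat_segmentation(green, sigmas=(1.0, 2.0, 4.0)):
    """Form splats: maximum multiscale gradient magnitude of the
    green channel, followed by watershed segmentation. Each labeled
    watershed region is one splat."""
    gradients = [ndi.gaussian_gradient_magnitude(green, s) for s in sigmas]
    grad_max = np.max(gradients, axis=0)
    # With no markers given, the watershed floods from the local
    # minima of the gradient image, yielding homogeneous regions.
    return segmentation.watershed(grad_max)
```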
[00102] Hemorrhage detection step 420, detailed in FIG. 11, uses one or more machine-coded algorithms running in a computing unit 600 to assess the presence of hemorrhages and the location of the hemorrhages within a retinal image or set of images. Retinal hemorrhages of MR are predominantly white-centered, intra-retinal, blot-like hemorrhages similar to Roth spots, which most commonly involve the inner retinal layers. They can occur in isolation, in small groups, or in large clusters. FIG. 12(A) shows examples of MR hemorrhages (1201) and FIG. 12(B) shows the ground truth annotation of hemorrhages (1202). A human grader manually analyzes the color and intensity of retinal hemorrhages and annotates them to produce a ground truth. As shown in FIG. 11, an embodiment of hemorrhage detection step 420 preferably comprises a supervised classification step 421, an unsupervised detection step 423, a feature extraction step 424 to determine and extract image color features indicative of hemorrhages within the image, a pixel-based mathematical operation step 425, a watershed segmentation step 426, a hybrid method in step 427, a false positive reduction step 428, and an image classification step 429 to determine the most likely class for the image, i.e., whether the image contains hemorrhages. The most likely class in step 429 is then preferably assigned to the image or set of images. FIG. 12(C) illustrates an example of the probability map (1203) resulting from the steps carried out by the Hemorrhage Detection step 420 on a retinal image.

[00103] The supervised classification step 421 comprises extraction of features including color, difference of Gaussians, and local contrast, and preferably the use of a k-Nearest Neighbor classifier to assess the presence of hemorrhages in a retinal image. The unsupervised detection step 423 comprises extraction of color features in step 424 from various color spaces such as but not limited to ‘a’ (Lab color space), ‘u’ (Luv color space), ‘Q’ (YIQ color space), and ‘C’ (LCH color space). Furthermore, a pixel-based mathematical operation step 425 preferably combines the aforementioned four color channels and such others using a pixel-based image multiplication operation, where each output pixel’s value depends on the multiplication of the corresponding input pixel values. Step 425 improves the contrast of hemorrhage lesions relative to the retinal background. Next, a watershed segmentation step 426 comprises processing the contrast-improved image from step 425 to assess the gradient magnitude and form the watershed region boundaries. The output of step 426 comprises a segmentation image in which each watershed region is preferably assigned a probability of being inside a hemorrhage. The unsupervised detection step 423 preferably detects the small/subtle hemorrhages, whereas the supervised classification step 421 preferably assesses the presence of, and segments, large hemorrhages.

[00104] Furthermore, a hybrid method in step 427 preferably comprises combining the outputs of both the supervised classification step 421 and the unsupervised detection step 423, which results in a hybrid image (FIG. 12(C)). The false positive reduction step 428 preferably uses morphological features such as eccentricity, solidity, and bounding box of the detected hemorrhage candidates, to minimize false positive detections of hemorrhages. The image classification step 429 preferably assesses, for each image, the likelihood that it contains hemorrhages, by counting the number of hemorrhages detected in the given image, and preferably using the count to assess the likelihood value. Furthermore, a label is assigned to the given image or set of images based on the likelihood value, as “Hemorrhages present” or “Hemorrhages absent”.
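The unsupervised contrast step (424-425) amounts to a pixel-wise product of the four named color channels; a sketch follows, in which the min-max scaling before multiplication is an assumption, since the source does not specify a normalization.

```python
import numpy as np
from skimage import color

def hemorrhage_contrast(rgb):
    """Combine the 'a' (Lab), 'u' (Luv), 'Q' (YIQ), and 'C' (LCh)
    channels by pixel-wise multiplication to boost hemorrhage
    contrast against the retinal background."""
    rgb = rgb.astype(float) / 255.0
    lab = color.rgb2lab(rgb)
    channels = [
        lab[..., 1],                    # 'a' of CIE-Lab
        color.rgb2luv(rgb)[..., 1],     # 'u' of CIE-Luv
        color.rgb2yiq(rgb)[..., 2],     # 'Q' of YIQ
        color.lab2lch(lab)[..., 1],     # 'C' (chroma) of LCh
    ]
    out = np.ones(rgb.shape[:2])
    for ch in channels:
        # Scale each channel to [0, 1] before multiplying.
        out *= (ch - ch.min()) / (ch.max() - ch.min() + 1e-12)
    return out
```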
[00105] Vessel discoloration detection step 430, detailed in FIG. 13, comprises one or more machine-coded algorithms running in a computing unit 600 to assess the presence of discolored vessels and the location of the vessel discoloration within a retinal image or set of images. Retinal vessel changes are a feature uniquely associated with MR. Vessel discoloration presents as discoloration from red to orange or white, primarily in the retinal periphery. FIG. 14(A) shows an example of vessels presenting with discoloration due to MR (1401). A human grader manually analyzes the color and intensity changes in retinal vessels and annotates the vessels that contain discoloration, which is considered the ground truth. FIG. 14(B) presents the ground truth annotation of discolored vessels as dashed lines (1402). As shown in FIG. 13, the vessel discoloration detection step 430 preferably comprises a vessel segmentation step 431, a normal vessel identification step 432, a feature extraction step 435 to determine and extract image features indicative of vessel discoloration within the image, a feature space reduction step 436, a feature-based classification step 437 based on a partial least squares classifier, and an image classification step 439 to assess the most likely class for the image, i.e., whether the image contains vessel discoloration. The most likely class in step 439 is then preferably assigned to the image or set of images. FIG. 14(C) illustrates an example of the detection of discolored vessels (marked as dotted lines: 1404) resulting from the steps carried out by the Vessel Discoloration Detection step 430 on a retinal image.

[00106] The vessel segmentation step 431 comprises modifications to a previously developed vessel segmentation algorithm by incorporating information from the green channel of the RGB color space and the ‘a’ channel of the CIE-Lab color space, which represents the green-red component of the vessel pixels. The ‘a’ channel is useful in segmenting small, subtle vessels and especially the discolored vessels, which are not segmented accurately using the previously developed segmentation algorithm based on green channel features alone. The analysis of the ground truth annotations for vessel discoloration (dashed line annotation (1402) in FIG. 14(B)) demonstrates that the wider vessels (FIG. 14(A): 1406) close to the optic disc (FIG. 14(A): 1405) do not present with discoloration, and can be used as an indicator of normal vessel color. Therefore, this information is used in step 432 for identifying the normal vessel color within each image, by automatically identifying the dark, wider vessels closer to the optic disc using contrast enhancement and intensity thresholding. Furthermore, in step 432 the mean value is calculated over all normal vessel pixels, for each color channel. The algorithm then identifies the discolored vessels by calculating the intensity difference between each pixel of the remaining vasculature and the mean intensity value of the normal vasculature.
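Step 432's use of wide peripapillary vessels as a color reference can be sketched as below, assuming boolean masks for the full vasculature and for the identified normal vessels are already available from steps 431-432; the distance tolerance is illustrative.

```python
import numpy as np

def flag_discolored(vessel_mask, normal_mask, rgb, tol=0.15):
    """Flag vessel pixels whose color departs from the mean color of
    the wide, dark vessels near the optic disc (the normal reference)."""
    rgb = rgb.astype(float) / 255.0
    reference = rgb[normal_mask].mean(axis=0)        # mean per channel
    candidates = vessel_mask & ~normal_mask
    diff = np.linalg.norm(rgb - reference, axis=-1)  # per-pixel distance
    return candidates & (diff > tol)
```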
[00107] The feature extraction step 435 comprises a feature extraction process based on three factors: 1) the color of the discolored vessels is orange or white; 2) the edge of the wall of discolored vessels has low contrast; and 3) regions close to the vessel wall become discolored first when the pathology appears. Therefore, step 435 comprises extracting statistical image features such as the median, standard deviation, kurtosis, skewness, moment, and the 10th, 25th, 50th, 75th, and 95th percentiles of the content in the segment. The statistical features are preferably extracted from various color spaces such as RGB, HSL, CIE-XYZ, CMYK, Luv, and Lch, preferably in the center and wall areas of the vessels. The feature vector also includes gradient features such as but not limited to the gradient of the ‘red’, ‘green’, and ‘a’ channels to measure the contrast of the vessel.

[00108] The feature space reduction step 436 comprises reducing the redundant features from the set of 440 features. The total set comprises a plurality of image representations, e.g. 22 (19 color channels and 3 gradients), each in terms of a plurality of statistical features, extracted from both the inner and outer parts of vessel segments. Step 436 uses ANOVA or other similar statistical techniques during the training phase to assess which of the features are the most significant (p < 0.05), reducing the feature space by 75%. Furthermore, the feature-based classification step 437 uses one or more image processing techniques to normalize the features to have ‘zero’ mean and ‘unit’ standard deviation. Furthermore, the normalized feature vector is integrated using a partial least squares (PLS) classifier to classify the discolored vessel segments using leave-one-out validation. FIG. 14(C) depicts a retinal image with automated detection of discolored vessels marked as dotted lines (1404), and also shows the discolored vessels missed by the algorithm, marked as dashed-dotted lines (1403). The image classification step 439 preferably assesses, for each image, the likelihood that it contains vessel discoloration, by counting the number of discolored vessels detected in the given image, and preferably using the count to assess the likelihood value. Furthermore, a label is assigned to the given image or set of images based on the likelihood value, as “Vessel discoloration present” or “Vessel discoloration absent”.
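The ANOVA-based reduction of step 436 maps directly onto a standard F-test; in the sketch below, the significance level is the p < 0.05 criterion stated above.

```python
from sklearn.feature_selection import f_classif

def select_significant(features, labels, alpha=0.05):
    """Keep only features whose one-way ANOVA F-test against the
    class labels is significant (p < alpha) on the training data."""
    _, p_values = f_classif(features, labels)
    keep = p_values < alpha
    return features[:, keep], keep
```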
[00109] Lesion based classification step 440, detailed in FIG. 15, is preferably implemented as machine-coded algorithms running in a computing unit 600, to assess a likelihood of detecting the presence of Malarial retinopathy. The Malarial retinopathy detection step 450 uses the likelihood value from the lesion based classification step 440 to assign a label to an RGB retinal image or set of images that is indicative of the presence of MR and suggestive of an action that a physician can take, such as “MR present” or “MR absent”, “High risk of MR” or “Low risk of MR”, or “indeterminate”, or any other label obvious to anyone skilled in the art. As shown in FIG. 15, MR detection step 450 preferably assigns a label “MR present” to an RGB retinal image when the Quantitative PLS regression Classification step 444 outputs a higher likelihood value (between 0 and 1) for presence of MR, based on the whitening detection in step 410 that outputs a likelihood of presence of whitening, the hemorrhage detection in step 420 that outputs a likelihood of presence of hemorrhages, and the vessel discoloration detection in step 430 that outputs a likelihood of presence of vessel discoloration. The MR detection step 450 preferably assigns a label “MR absent” to an RGB retinal image when the Quantitative PLS regression Classification step 444 outputs a lower likelihood value (between 0 and 1) for presence of MR, based on the same three lesion likelihoods. The interpretation of the “likelihood value for presence of MR” as low or high, or a threshold classifying the likelihood value as low or high, is based on the retinal image dataset or the patient population under investigation, and is determined by one or more clinical protocols based on the required sensitivity and specificity of MR detection. It will be known to those skilled in the art that different values of the threshold may be used in the present invention, according to one or more clinical protocols. When a retinal image is assigned a label “MR present” the user should confirm the presence of CM and determine treatment strategies. When a retinal image is assigned a label “MR absent” the user should investigate other causes of coma and parasitic infection.

[00110] The goal of the MR detection software system is to reduce the false positive rate (FPR) by accurately diagnosing patients for the presence or absence of cerebral malaria based on retinal signs of Malarial retinopathy. This requires the system to operate preferably at high specificity. Therefore, the individual MR lesion detection algorithms are preferably tuned at high specificity settings.

[00111] Experimental results from application of one of the embodiments of the present invention as described herein have shown its utility in detecting malarial retinopathy in retinal images and are explained in the next paragraphs.

[00112] A retrospective dataset of N=86 retinal color images obtained from pediatric patients clinically diagnosed with CM was provided by the University of Liverpool. Of those 86 images, N=70 patients presented with signs of MR and N=16 with no MR signs. The images were collected between the years 2006 and 2014 at the Pediatric Research Ward at the Queen Elizabeth Central Hospital (QECH), University of Malawi College of Medicine (UMM), Blantyre, Malawi, Africa. The images were captured with a Topcon 50EX retinal camera with a 50-degree field of view (FOV). All images were anonymized by the retinal reading center at the University of Liverpool and provided under a numeric identifier to the authors of this study. The multiple images of each patient were stitched to create a mosaic image. Of the 70 images with MR, 67 images had retinal whitening, 57 had white-centered hemorrhages, and 49 had vessel discoloration, with some images having multiple MR lesions present.

[00113] A retinal reader who was trained at the retinal reading center at the University of Liverpool annotated and graded the 86 mosaic images. The detailed annotations used a color-coding scheme to denote different MR lesions. FIGS. 16(A) and 16(B) show a retinal image and its ground truth annotation, respectively. In the ground truth (FIG. 16(B)), retinal whitening is annotated as a dotted-striped pattern (1602), vessel discoloration as dashed lines (1604), and hemorrhages as a solid striped pattern (1603). The annotations were validated by an ophthalmologist with expertise in MR. The annotated mosaic database was used to develop image processing algorithms to detect the three retinal signs of MR.
[00114] In the embodiments of the present invention described herein for development of one or more image processing algorithms for detection of the three MR lesions and Malarial retinopathy, the algorithm performance was calculated in terms of image-based classification. For each image, the likelihood maps obtained for detection of a lesion were converted to a binary image using various thresholds and were compared against the annotated ground truth image. The image-based analysis determines the algorithm performance in classifying each retinal image as one with or without a lesion. This classification technique considers a given image a positive detection if at least one of the lesions annotated in the ground truth is detected by the algorithm at a given threshold. Image-based sensitivity is defined as the fraction of images marked positive in the ground truth that were detected as positive by the algorithm, while image-based specificity is defined as the fraction of images marked negative in the ground truth that were detected as negative by the algorithm. For the image-based classification analysis, a receiver operating characteristic (ROC) curve is determined. Since the aim of the MR detection algorithms is to reduce the false positive rate (FPR) in CM diagnosis and achieve high specificity, the algorithms are tuned to a high specificity setting (>90%). The dataset of N=86 images is utilized for testing each algorithm using a leave-one-out methodology.

[00115] In the given dataset, whitening is present in 67 images and 19 images present with no whitening. With the ultimate goal of determining the presence or absence of whitening in a given image, three features were extracted from each image: 1) the maximum probability assigned to a splat in the image; 2) the sum of non-zero probabilities assigned to splats; 3) the product of non-zero probabilities assigned to splats. The PLS classifier is used to classify each image as one with or without whitening, based on the above image features. The classification of images with or without whitening achieved an AUC of 0.81, with a specificity of 94% and a sensitivity of 65%. The algorithm can also be operated at a high sensitivity point of 78% at a specificity of 65%.

[00116] The dataset has a distribution of 57 images with hemorrhages and 29 images without hemorrhages. The average number of hemorrhages per image is 15. In order to classify each image as with or without hemorrhages, the number of hemorrhages detected by the algorithm is counted in the given image and the count is used to determine whether the image has hemorrhages. The classification of images with or without hemorrhages achieved an AUC of 0.89, with a specificity of 96% and a sensitivity of 73%. The algorithm can also be operated at a high sensitivity point of 100% at a specificity of 78%.

[00117] The dataset contains 49 images with vessel discoloration and 37 without discoloration. In order to calculate the performance for classifying each image as with or without vessel discoloration, the number of vessel segments classified as containing discoloration is counted in the given image and the count is used to determine whether the image has discolored vessels. The image-based classification obtained an AUC of 0.85 with a specificity of 100% and a sensitivity of 62%. The algorithm can also be operated at a high sensitivity point of 90% at a specificity of 66%.

[00118] The individual MR lesion detection algorithms were integrated using a PLS classifier for detecting the presence or absence of MR with high specificity, while maintaining a minimum sensitivity of 90% for MR detection. The MR detection algorithm yielded an AUC of 0.97 with a specificity of 100% at a sensitivity of 95%. The positive predictive value (PPV) for MR detection was 0.98. PPV was not calculated for individual MR lesions due to the unavailability of prevalence data per lesion. Table 1 shows the performance of the detection algorithms for vessel discoloration, hemorrhages, retinal whitening, and MR detection, in terms of AUC, tuned at high specificity while maintaining moderate sensitivities, with the respective 95% confidence intervals (CI).
Table 1. Performance of the detection algorithms (95% confidence intervals are given in the original table image):
  Retinal whitening:     AUC 0.81, specificity 94%,  sensitivity 65%
  Hemorrhages:           AUC 0.89, specificity 96%,  sensitivity 73%
  Vessel discoloration:  AUC 0.85, specificity 100%, sensitivity 62%
  MR detection:          AUC 0.97, specificity 100%, sensitivity 95%
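As an illustration of the image-based evaluation methodology described in paragraph [00114], the following sketch computes the AUC and selects a high-specificity operating point (>90%) from per-image lesion likelihoods; it assumes at least one ROC point satisfies the specificity constraint.

```python
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

def image_based_performance(likelihoods, ground_truth, min_specificity=0.90):
    """Sweep the decision threshold over per-image likelihoods, compute
    ROC/AUC, and pick the highest-sensitivity point whose specificity
    exceeds the required setting."""
    auc = roc_auc_score(ground_truth, likelihoods)
    fpr, tpr, thresholds = roc_curve(ground_truth, likelihoods)
    specificity = 1.0 - fpr
    valid = specificity > min_specificity
    best = np.argmax(np.where(valid, tpr, -1.0))  # best sensitivity among valid points
    return auc, tpr[best], specificity[best], thresholds[best]
```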
[00119] The results demonstrate that the automated software for MR detection can be used as a means of detecting MR lesions and for diagnosing the presence or absence of MR with high accuracy. The AUCs for individual lesion detection, in the range of 0.81 - 0.89, indicate the capability of the respective algorithms in distinguishing between an MR pathology and the retinal background. The AUC of 0.97 for MR detection indicates that, given a random retinal image of a clinically diagnosed CM patient, the proposed MR detection system will correctly identify the presence or absence of MR in about 97% of the patients. This system operating at a high specificity setting for diagnosis of MR, at 100% specificity and 95% sensitivity, means a large reduction in the false positive diagnosis of cerebral malaria cases.

[00120] In summary, the experimental results presented herein show an automated method for detection of MR lesions and a regression classifier that categorizes a patient-case into MR/no-MR using retinal color images. The software algorithms perform with sufficient accuracy to enable a highly specific detection of MR, which can improve the specificity and positive predictive value of CM diagnosis.

[00121] In one embodiment of the present invention, the computer systems and communication infrastructure are composed of a frame buffer (not shown) to the display monitor. Computing Unit 600 preferably includes a main memory, for example random access memory (“RAM”), read-only memory (“ROM”), and a mass storage device, including an automated onsite and off-site backup system. Computing Unit 600 may also include a secondary memory such as a hard disk drive, a removable storage drive, an interface, or any combination thereof. Computing Unit 600 may also include a communications interface, for example, a modem, a network interface (such as an Ethernet card), a communications port, a PCMCIA slot and card, wired or wireless systems, or other means of electronic communication. Computer programs are preferably stored in main memory and/or secondary memory. Computer programs may also be received via the communications interface. Computer programs, when executed, enable the computer system, particularly the processor, to implement the methods according to the present invention. The methods according to embodiments of the present invention may be implemented using software stored in a computer program product and loaded into the computer system using a removable storage drive, hard drive, or communications interface. The software and/or computer system described herein may perform any one of, or any combination of, the steps of any of the methods presented herein. It is also contemplated that methods according to embodiments of the present invention may be performed automatically, or may be invoked by some form of manual intervention.

[00122] Embodiments of the present invention are also directed to computer products, otherwise referred to as computer program products, to provide software to the computer system. Computer products store software on any computer useable medium. Such software, when executed, implements the methods according to one embodiment of the present invention. Embodiments of the present invention employ any computer useable medium, known now or in the future.
Examples of computer useable mediums include, but are not limited to, primary storage devices (e.g., any type of random access memory), secondary storage devices (e.g., hard drives, floppy disks, CD ROMs, tapes, magnetic storage devices, optical storage devices, Micro-Electro-Mechanical Systems (“MEMS”), nanotechnological storage devices, cloud storage, etc.), and communication mediums (e.g., wired and wireless communications networks, local area networks, wide area networks, intranets, etc.). It is to be appreciated that the embodiments described herein can be implemented using software, hardware, firmware, or combinations thereof. The computer system, or network architecture, of FIG. 17 is provided only for purposes of illustration, such that the present invention is not limited to this specific embodiment. It is appreciated that a person skilled in the relevant art knows how to program and implement the invention using any computer system or network architecture.

[00123] Image Quality Analyzer phase 300 and MR detection phase 400 preferably use machine-coded mathematical algorithms running in a Computing Unit 600. Computing Unit 600 preferably comprises a self-contained unit comprising one or more human-machine interfaces, power supplies, memories, processing units, and storage units, and is preferably connected directly to the Retinal Camera 200 through a communication medium that transports the live video feed 270. Further, a Computing Unit 600 is preferably integrated into the base of the Retinal Camera 200 such that said Computing Unit 600 does not add to the footprint and space requirements of the Retinal Camera 200. It will be known to those skilled in the art that there are alternatives to the Computing Unit 600 and that all said alternatives perform the same function as the Computing Unit 600 described herein.

[00124] An embodiment of Computing Unit 600 comprises one or more machine-implemented tools and applications to distribute, manage, and process retinal images transmitted by the retinal camera 200. The Computing Unit 600 further comprises one or more types of computing memory 610, computing processors 620, and graphic processing units 630. FIG. 17 illustrates an embodiment of computing unit 600. The computing memory 610 preferably comprises solid state and magnetic memory media to store the image data to be processed as well as data generated during intermediate steps in the Image Quality Analyzer phase 300 and the MR detection phase 400. The computing processors 620 preferably execute machine-coded algorithms on solid state computer circuits according to the Image Quality Analyzer phase 300 and the MR detection phase 400. One or more graphic processing units 630 preferably comprise two or more parallelized solid state computing cores to divide processing tasks into pieces that can be done in parallel in said computing cores.

[00125] FIG. 20 illustrates an embodiment of the present invention comprising a Computing Unit 600 comprising computing memory 610, computing processors 620, and graphic processing units 630 implemented as a single board computer (SBC) 640 that can operate onboard the retinal camera 200. FIG. 19 illustrates an embodiment of the present invention comprising a Computing Unit 600 comprising computing memory 610, computing processors 620, and graphic processing units 630 implemented as a system on a chip (SOC) 650 that can be located onboard the retinal camera 200.
[00126] FIG. 21 illustrates an embodiment of the present invention comprising a Computing Unit 600 comprising computing memory 610, computing processors 620, and graphic processing units 630 implemented in a personal computer (PC) 660 that can operate separately and independently from the retinal camera 200.

[00127] FIG. 22 illustrates an embodiment of the present invention comprising a Computing Unit 600 comprising computing memory 610, computing processors 620, and graphic processing units 630 implemented as a cloud based system 670 through an Internet connection to a distributed data center 675 and operated separately and independently from the retinal camera 200.

[00128] FIG. 23 illustrates an embodiment of the present invention comprising a Computing Unit 600 comprising computing memory 610, computing processors 620, and graphic processing units 630 implemented as a centralized processing center or central server 680 through a wired, wireless, or Internet connection to one or more dedicated server computers 685 and operated separately and independently from the retinal camera 200. It will be known to those skilled in the art that there are various alternatives to implement a Computing Unit 600 and that these alternatives perform the same function as the Computing Unit 600 described herein.

[00129] It will be understood that the embodiments disclosed and defined in the specification herein extend to all alternative combinations of the individual features mentioned or evident from the text or drawings. All of these different combinations constitute various alternative aspects of the embodiments of the present invention. Although the invention has been described in detail with particular reference to these preferred embodiments, other embodiments can achieve the same results. Variations and modifications of the present invention will be obvious to those skilled in the art and it is intended to cover all such modifications and equivalents. The entire disclosures of all patents and publications cited above are hereby incorporated by reference.


CLAIMS

WHAT IS CLAIMED IS:

1. A method to perform automatic malarial retinopathy detection, the method comprising the steps of:
illuminating a retina using an illumination source;
capturing a retinal image with a retinal camera;
transmitting the retinal image to a processor;
performing via the processor an assessment in real time of the image quality wherein image quality is determined by one or more of the image quality steps of:
determining alignment of the retinal image;
determining presence and extent of crescents and shadows in the retinal image; and
determining quantitative image quality of the retinal image via a classification process trained using examples of visual perception by human experts; and
adjusting the retinal camera if the image quality does not meet predetermined quality requirements; and
performing via the processor a detection in real time of Malarial retinopathy in the retinal image wherein Malarial retinopathy is determined by one or more of the detection of Malarial retinopathy steps of:
determining the presence of retinal whitening in the retinal image;
determining the presence of hemorrhages in the retinal image;
determining the presence of vessel discoloration in the retinal image; and
determining quantitative likelihood of presence of Malarial retinopathy in the retinal image via a classification process trained using examples of visual perception of Malarial retinopathy by human experts.
2. The method of claim 1 additionally comprising assigning a descriptive label to the retinal image as to its image quality.
3. The method of claim 2 wherein the performing an image quality assessment step classifies the retinal image according to a set of retinal image quality labels.
4. The method of claim 1 wherein the adjusting step employs a user interface to indicate to a user the quality of the retinal image and suggested actions to take with respect to the retinal camera.
5. The method of claim 1 wherein the performing the Malarial retinopathy detection step transforms the retinal image information into a plurality of color spaces.
6. The method of claim 5 wherein the performing the Malarial retinopathy detection step transforms the plurality of color space information into a plurality of color, texture, and statistical features.
7. The method of claim 6 wherein the performing the Malarial retinopathy detection step groups the plurality of color, texture, and statistical features into a plurality of feature vectors.
8. The method of claim 1 wherein the performing the Malarial retinopathy detection step determines presence and extent of retinal whitening in the retinal image according to one or more clinical protocols.
9. The method of claim 8 wherein the executing step employs a reduction of camera reflex in the image using color and textural features to distinguish between the reflex and true whitening.
10. The method of claim 8 wherein the executing step additionally employs color, textural, morphological, and statistical feature extraction step to form the feature vectors.
11. The method of claim 8 additionally comprising assigning a descriptive label to the image as to presence and extent of retinal whitening according to one or more clinical protocols.
12. The method of claim 8 wherein the executing step employs a threshold to assign labels to the image.
13. The method of claim 8 wherein the executing step additionally comprises tuning one or more parameters to vary assignment of labels and/or performance of a classification model in terms of sensitivity and specificity.
14. The method of claim 1 wherein the performing the Malarial retinopathy detection step determines presence and extent of hemorrhages in the image according to one or more clinical protocols.
15. The method of claim 14 wherein the executing step additionally employs color, difference of Gaussians, contrast, and morphological feature extraction phase to form the feature vectors.
16. The method of claim 14 additionally comprising assigning a descriptive label to the image as to presence and extent of hemorrhages according to one or more clinical protocols.
17. The method of claim 14 wherein the executing step employs a threshold to assign labels to the image.
18. The method of claim 14 wherein the executing step additionally comprises tuning one or more parameters to vary assignment of labels and/or performance of a classification model in terms of sensitivity and specificity.
19. The method of claim 1 wherein the performing the Malarial retinopathy detection step determines presence and extent of vessel discoloration in the image according to one or more clinical protocols.
20. The method of claim 19 wherein the executing step additionally employs color, intensity gradient, and statistical feature extraction phase to form the feature vectors.
21. The method of claim 19 wherein the executing step additionally employs a feature reduction phase to reduce the number of features.
22. The method of claim 19 additionally comprising assigning a descriptive label to the image as to presence and extent of vessel discoloration according to one or more clinical protocols.
23. The method of claim 19 wherein the executing step employs a threshold to assign labels to the image.
24. The method of claim 19 wherein the executing step additionally comprises tuning one or more parameters to vary assignment of labels and/or performance of a classification model in terms of sensitivity and specificity.
25. The method of claim 1 additionally comprising executing real time analysis of likelihood of presence of Malarial retinopathy in the retinal image.
26. The method of claim 25 additionally comprising assigning a label to the image indicative of the presence or absence of Malarial retinopathy according to the likelihood value of the output of method of claim 25.
27. The method of claim 25 wherein the executing step additionally comprises tuning one or more parameters to vary assignment of labels and/or performance of a classification model in terms of sensitivity and specificity.
28. The method of claim 25 wherein determining quantitative likelihood of presence of Malarial retinopathy in the retinal image via a classification process trained using examples of visual perception of Malarial retinopathy by human experts further comprises a high specificity classification based upon quantitative result.
29. The method of claim 25 wherein determining quantitative likelihood of presence of Malarial retinopathy in the retinal image via a classification process trained using examples of visual perception of Malarial retinopathy by human experts further comprises a high sensitivity classification model based upon quantitative result.
30. The method of claim 1 wherein the retinal camera includes an optical contact lens with wide field of view.
31. The method of claim 30 wherein the wide field of view is about 120-degrees or greater.
32. A system to perform automatic retinal screening, said system comprising:
an illumination source illuminating a retina;
a retinal camera capturing a retinal image;
a processor receiving the retinal image, performing an assessment in real time of the image quality, adjusting a setting of the retinal camera if the image quality does not meet predetermined quality requirements, and performing an assessment in real time of the presence or absence of Malarial retinopathy.
33. A method to automatically determine presence or absence of malarial retinopathy in a retinal image, the method comprising the steps of:
illuminating a retina using an illumination source;
capturing the retinal image with a retinal camera;
transmitting the retinal image to a processor;
performing via the processor an assessment in real time of the image quality;
performing via the processor an assessment in real time of the presence or absence of Malarial retinopathy, comprising:
detecting the presence or absence of retinal whitening;
detecting the presence or absence of hemorrhages;
detecting the presence or absence of vessel discoloration; and
detecting the presence or absence of malarial retinopathy via a classification process trained using example images containing retinal lesions of malarial retinopathy.
PCT/US2016/045051 2015-07-30 2016-08-01 System and methods for malarial retinopathy screening WO2017020045A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US201562198981P 2015-07-30 2015-07-30
US62/198,981 2015-07-30

Publications (1)

Publication Number Publication Date
WO2017020045A1 true WO2017020045A1 (en) 2017-02-02

Family

ID=57886876

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2016/045051 WO2017020045A1 (en) 2015-07-30 2016-08-01 System and methods for malarial retinopathy screening

Country Status (1)

Country Link
WO (1) WO2017020045A1 (en)

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107341516A (en) * 2017-07-07 2017-11-10 广东中星电子有限公司 Picture quality adjusting method and image procossing intelligent platform
CN110264443A (en) * 2019-05-20 2019-09-20 平安科技(深圳)有限公司 Eye fundus image lesion mask method, device and medium based on feature visualization
WO2019206208A1 (en) * 2018-04-26 2019-10-31 上海鹰瞳医疗科技有限公司 Machine learning-based eye fundus image detection method, device, and system
WO2020160606A1 (en) * 2019-02-07 2020-08-13 Commonwealth Scientific And Industrial Research Organisation Diagnostic imaging for diabetic retinopathy
CN112017157A (en) * 2020-07-21 2020-12-01 中国科学院西安光学精密机械研究所 Method for identifying damage point in optical element laser damage threshold test
WO2023285287A1 (en) * 2021-07-14 2023-01-19 Oncotech Ltd A computer implemented method for determining a probability of a disease in at least one image representative of an eye
CN116188810A (en) * 2023-04-25 2023-05-30 浙江一山智慧医疗研究有限公司 Method, device and application for extracting characteristics of optic disc
CN117495817A (en) * 2023-11-10 2024-02-02 佛山市禅一智能设备有限公司 Method and device for judging abnormal images of blood vessels under endoscope

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110242306A1 (en) * 2008-12-19 2011-10-06 The Johns Hopkins University System and method for automated detection of age related macular degeneration and other retinal abnormalities

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110242306A1 (en) * 2008-12-19 2011-10-06 The Johns Hopkins University System and method for automated detection of age related macular degeneration and other retinal abnormalities

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
JOSHI ET AL.: "Automated Detection of Malarial Retinopathy-Associated Retinal Hemorrhages'';", INVESTIGATIVE OPHTHALMOLOGY & VISUAL SCIENCE, vol. 53, no. 10, September 2012 (2012-09-01), pages 6582 - 6588, XP055350569, Retrieved from the Internet <URL:https://www.ncbi.nim.nih.gov/pmclarticles/PMC3460387/pdf/il552-5783-53-10-6582.pdf> [retrieved on 20160923] *
LOCHHEAD ET AL.: "The effects of hypoxia on the ERG in paediatric cerebral malaria'';", EYE;, vol. 24, no. 2, February 2010 (2010-02-01), pages 259 - 264, XP055350573, Retrieved from the Internet <URL:http://www. nature .comleye/journal/v24/n2/pdf/oyo2009162a.pdf> [retrieved on 20160923] *
RIOS ET AL.: "Clinical Impact Of Image Quality Assessment In The Performance Of An Automated Diabetic Retinopathy Screening System'';", ASSOCIATION FOR RESEARCH IN VISION AND OPHTHALMOLOGY;, 2014, pages 1, XP055350570, Retrieved from the Internet <URL:http://visionquest-bio.comNQ_pdf/Carla_ 04-18-2014 web.pd f> [retrieved on 20160923] *
SHIELDS ET AL.: "Wide-angle Imaging of the Ocular Fundus'';", REVIEW OF OPHTHALMOLOGY;, 15 February 2003 (2003-02-15), pages 1 - 10, Retrieved from the Internet <URL:https://www.reviewofophthalmology.com/article/wide-angle-imaging-of-the-ocular-fundus> [retrieved on 20160926] *

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107341516A (en) * 2017-07-07 2017-11-10 Guangdong Zhongxing Electronics Co., Ltd. Picture quality adjusting method and image processing intelligent platform
WO2019206208A1 (en) * 2018-04-26 2019-10-31 Shanghai Eaglevision Medical Technology Co., Ltd. Machine learning-based eye fundus image detection method, device, and system
WO2020160606A1 (en) * 2019-02-07 2020-08-13 Commonwealth Scientific And Industrial Research Organisation Diagnostic imaging for diabetic retinopathy
CN110264443A (en) * 2019-05-20 2019-09-20 Ping An Technology (Shenzhen) Co., Ltd. Fundus image lesion labeling method, device and medium based on feature visualization
CN110264443B (en) * 2019-05-20 2024-04-16 Ping An Technology (Shenzhen) Co., Ltd. Fundus image lesion labeling method, device and medium based on feature visualization
CN112017157A (en) * 2020-07-21 2020-12-01 Xi'an Institute of Optics and Precision Mechanics, Chinese Academy of Sciences Method for identifying damage point in optical element laser damage threshold test
CN112017157B (en) * 2020-07-21 2023-04-11 Xi'an Institute of Optics and Precision Mechanics, Chinese Academy of Sciences Method for identifying damage point in optical element laser damage threshold test
WO2023285287A1 (en) * 2021-07-14 2023-01-19 Oncotech Ltd A computer implemented method for determining a probability of a disease in at least one image representative of an eye
CN116188810A (en) * 2023-04-25 2023-05-30 Zhejiang Yishan Smart Medical Research Co., Ltd. Method, device and application for extracting characteristics of optic disc
CN117495817A (en) * 2023-11-10 2024-02-02 Foshan Chanyi Intelligent Equipment Co., Ltd. Method and device for judging abnormal images of blood vessels under endoscope
CN117495817B (en) * 2023-11-10 2024-09-03 Foshan Chanyi Intelligent Equipment Co., Ltd. Method and device for judging abnormal images of blood vessels under endoscope

Similar Documents

Publication Publication Date Title
US10413180B1 (en) System and methods for automatic processing of digital retinal images in conjunction with an imaging device
Akbar et al. Automated techniques for blood vessels segmentation through fundus retinal images: A review
Cao et al. Hierarchical method for cataract grading based on retinal images using improved Haar wavelet
Niemeijer et al. Automated detection and differentiation of drusen, exudates, and cotton-wool spots in digital color fundus photographs for diabetic retinopathy diagnosis
US11210789B2 (en) Diabetic retinopathy recognition system based on fundus image
WO2017020045A1 (en) System and methods for malarial retinopathy screening
Adal et al. An automated system for the detection and classification of retinal changes due to red lesions in longitudinal fundus images
EP3373798B1 (en) Method and system for classifying optic nerve head
Mariakakis et al. Biliscreen: smartphone-based scleral jaundice monitoring for liver and pancreatic disorders
Zhang et al. Detection of microaneurysms using multi-scale correlation coefficients
Kauppi et al. The diaretdb1 diabetic retinopathy database and evaluation protocol.
Mayya et al. Automated microaneurysms detection for early diagnosis of diabetic retinopathy: A Comprehensive review
Xiong et al. An approach to evaluate blurriness in retinal images with vitreous opacity for cataract diagnosis
Sánchez et al. Retinal image analysis to detect and quantify lesions associated with diabetic retinopathy
Navarro et al. Automatic detection of microaneurysms in diabetic retinopathy fundus images using the L*a*b color space
Kauppi Eye fundus image analysis for automatic detection of diabetic retinopathy
Joshi et al. Automated detection of malarial retinopathy in digital fundus images for improved diagnosis in Malawian children with clinically defined cerebral malaria
Dubey et al. Recent developments on computer aided systems for diagnosis of diabetic retinopathy: a review
Giancardo Automated fundus images analysis techniques to screen retinal diseases in diabetic patients
Mookiah et al. Computer aided diagnosis of diabetic retinopathy using multi-resolution analysis and feature ranking frame work
US20240032784A1 (en) Integrated analysis of multiple spectral information for ophthalmology applications
Al-Saedi et al. Design and Implementation System to Measure the Impact of Diabetic Retinopathy Using Data Mining Techniques
Niemeijer Automatic detection of diabetic retinopathy in digital fundus photographs
Kurup et al. Automated malarial retinopathy detection using transfer learning and multi-camera retinal images
Preethi et al. Performance analysis of iris-based identification system based on exudates

Legal Events

Code Title Description
121 EP: the EPO has been informed by WIPO that EP was designated in this application (Ref document number: 16831470; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)
122 EP: PCT application non-entry in European phase (Ref document number: 16831470; Country of ref document: EP; Kind code of ref document: A1)