CN109348732A - Image processing method and system - Google Patents
- Publication number
- CN109348732A (application CN201880001587.4A)
- Authority
- CN
- China
- Prior art keywords
- user
- code
- images
- reference picture
- classification
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61F—FILTERS IMPLANTABLE INTO BLOOD VESSELS; PROSTHESES; DEVICES PROVIDING PATENCY TO, OR PREVENTING COLLAPSING OF, TUBULAR STRUCTURES OF THE BODY, e.g. STENTS; ORTHOPAEDIC, NURSING OR CONTRACEPTIVE DEVICES; FOMENTATION; TREATMENT OR PROTECTION OF EYES OR EARS; BANDAGES, DRESSINGS OR ABSORBENT PADS; FIRST-AID KITS
- A61F9/00—Methods or devices for treatment of the eyes; Devices for putting-in contact lenses; Devices to correct squinting; Apparatus to guide the blind; Protective devices for the eyes, carried on the body or in the hand
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
Landscapes
- Eye Examination Apparatus (AREA)
Abstract
The present invention relates to an image processing method and system. The method includes: receiving an initial user file from a user terminal, the initial user file including user data and a user image; loading the initial user file into a server, the server storing reference images and a computation model, the reference images including multiple reference images with classification code 1 and multiple reference images with classification code 2; comparing the user image with the reference images using the computation model and determining the classification code of the user image as either 1 or 2; storing the classification code of the user image into the initial user file to generate an updated user file; and sending the updated user file to the user terminal.
Description
Image Processing Method and System
Technical Field
[0001] The present invention relates to an image processing method and system. In particular, the present invention relates to a method and system for classifying retinal fundus images.
Background
[0002] The main causes of eye disease, blindness and visual impairment include cataract (47.9%), glaucoma (12.3%), age-related macular degeneration (AMD) (8.7%), corneal opacity (5.1%) and diabetic retinopathy (4.8%). The prevalence of these diseases is rising, partly because of sedentary lifestyles and an ageing population, which bring with them many metabolic and age-related conditions associated with the above eye diseases, such as diabetes, hypertension and high cholesterol (hyperlipidaemia). If detected and treated in time, most blindness caused by the above eye diseases is preventable.
[0003] Cataract is an eye disease in which clouding of the patient's intraocular lens causes blurred or misty vision. Cataract occurs mainly in the eyes of the elderly and is believed to result from deterioration of the protein fibres in the crystalline lens. This leads to the formation of clumps, producing cloudy regions in the lens. Without early treatment, cataract can lead to permanent vision loss.
[0004] According to a 2010 survey, 45.9 million people worldwide had their eyesight affected by cataract, and of these, 10.8 million were blinded by it. In Asia alone, 31.5 million people had cataract and 7.27 million had lost their eyesight to it. In 2010 there were 2.5 million people with cataract in China, a number expected to grow by 400,000 every year, mainly because of the large population aged 65 and above.
[0005] Glaucoma refers to a group of eye diseases involving slow deterioration of the optic nerve at the back of the eye. It is often due to a build-up of fluid pressure in the eye, caused by obstruction of the circulation of the fluid known as the aqueous humour, which would normally drain naturally from the eye. The blockage may arise from inherent causes or from chemical injury to the eye.
[0006] There are several types of glaucoma: open-angle glaucoma, angle-closure glaucoma and congenital glaucoma.
[0007] Open-angle glaucoma (OAG) is the most common type. It occurs even though the drainage angle of the eye (where the iris and cornea meet) is normal: damage to the eye's drainage ability causes fluid to accumulate, raising the internal pressure and leading to optic nerve damage.
[0008] Angle-closure glaucoma (ACG), on the other hand, is less common. It occurs when the angle between the iris and cornea is too narrow, interrupting the drainage of fluid and causing a sudden rise in intraocular pressure.
[0009] Congenital glaucoma is a rare form of glaucoma caused by poor or incomplete development of the eye's drainage channels during the patient's foetal period.
[0010] As its name makes clear, diabetic retinopathy (DR) occurs only in people with diabetes. Over time the disease causes progressive damage to the blood vessels in the retina. The large amount of sugar present in the blood of a diabetic patient progressively damages the ocular blood vessels, causing the tiny vessels in the retina to leak fluid or bleed. This produces visual impairment such as cloudy or blurred vision. In the late stages of the disease, new blood vessels form and further damage the retinal cells. If not treated in time, it may lead to blindness.
[0011] The progression of diabetic retinopathy is classified into four stages: mild, moderate, severe and proliferative. In the first stage (mild), the tiny blood vessels in the retina swell. In the second stage (moderate), the retinal vessels continue to swell, their structure is disrupted, and they begin to lose the ability to transport blood. During this stage the shape of the retina changes, which may lead to diabetic macular edema (DME). In the third stage (severe), most of the vessels become blocked, reducing the blood supplied to the retina. When the retina is deprived of blood supply, growth factors are released that trigger the formation of new vessels. In the final stage (proliferative), the sustained release of growth factors causes fragile new vessels to grow; these bleed and leak easily, eventually leading to retinal detachment.
[0012] According to a 2010 clinical study, an estimated 371 million adults worldwide were affected by diabetes [9,14]. In Asia alone, an estimated 222.6 million adults were affected [8,9,14]. Countries such as India and China have the largest patient populations, with 65.1 million and 113.9 million people affected by diabetes respectively. Of the 371 million adults, 126.6 million have DR [9,14]; in China alone, 56.15 million people suffer from DR [9,14]. This is a global epidemic, and the number of patients grows year by year.
[0013] In 2010, an estimated 60.4 million people worldwide had glaucoma, of whom 44.7 million had OAG and 15.7 million had ACG [5]. In Asian countries, including China, India, Japan and Southeast Asia, 34.4 million people had glaucoma, of whom 20.9 million had OAG and 13.5 million had ACG.
[0014] With existing technology, timely or early detection of many eye diseases has proved difficult to achieve, especially in vast developing countries such as China, Russia and India, where large rural populations are dispersed over wide regions.
[0015] In 2010, the International Council of Ophthalmology pointed out that there were only about 200,000 ophthalmologists in the whole world — only a few dozen ophthalmologists per million people. A forecast for 2020 projected that Southeast Asia would have a population of 1.63 billion but only 13,300 ophthalmologists, a ratio of one ophthalmologist to roughly 122,000 people. These statistics indicate that more ophthalmologists are needed, especially in developing countries.
[0016] The following are specific data from the International Council of Ophthalmology's 2012 report (data source: http://www.icoph.org/ophthalmologists-worldwide.html).
[0017] As can be seen from the above data, Indonesia and Thailand each have only about 1,000 to 1,300 ophthalmologists serving populations of more than 60 million. With so few ophthalmologists, people who need treatment for eye disease face a high risk of never being seen at all.
[0018] Alongside the shortage of ophthalmologists, people in rural and remote areas also find it difficult to obtain ophthalmic healthcare, because ophthalmologists are concentrated in big cities. This makes travelling to a big-city hospital, and queuing there to be seen by an ophthalmologist, unavoidable.
[0019] According to statistics from the Singapore National Eye Centre (SNEC), of the diabetic patients who undergo eye examinations, only one third actually require timely treatment from an ophthalmologist. This means that ophthalmologists spend a great deal of time and resources examining patients who do not actually need urgent medical care — time and resources that could be better spent examining patients who genuinely need treatment.
[0020] There is therefore a need for an economical and convenient solution that allows users with healthy eyes, especially users in rural or remote areas, to avoid unnecessary trips to an ophthalmologist, while users with latent eye disease can learn about their ocular health in time and arrange further screening or treatment of their eyes.
Summary of the Invention
[0021] One embodiment of the present invention provides an image processing method, the method comprising: receiving an initial user file from a user terminal, the initial user file including user data and a user image; loading the initial user file into a server, the server storing reference images and a computation model, the reference images including multiple reference images with classification code 1 and multiple reference images with classification code 2; comparing the user image with the reference images using the computation model and determining the classification code of the user image as either 1 or 2; storing the classification code of the user image into the initial user file to generate an updated user file; and sending the updated user file to the user terminal.
[0022] Preferably, the updated user file includes a colour mark, the colour mark comprising a green mark corresponding to classification code 1 and a red mark corresponding to classification code 2.
[0023] Preferably, if the classification code of the user image is determined to be 1, the method further comprises storing a first follow-up code into the initial user file to generate the updated user file.
[0024] Preferably, if the classification code of the user image is determined to be 2, the method further comprises storing a second follow-up code into the initial user file to generate the updated user file.
[0025] Preferably, the method further comprises storing user images whose classification codes have been determined into the server as reference images.
[0026] Preferably, the method further comprises, before receiving the initial user file from the user terminal, loading the reference images into the server, training an artificial intelligence engine based on the reference images, and constructing the computation model using the artificial intelligence engine.
[0027] Preferably, the artificial intelligence engine comprises at least one algorithm, or a combination of algorithms, selected from machine learning algorithms and deep learning algorithms.
[0028] Preferably, the artificial intelligence engine comprises at least one of a support vector machine (SVM), a gradient boosting machine (GBM), a random forest and a convolutional neural network.
[0029] Preferably, the method further comprises training the artificial intelligence engine based on the user image and the determined classification code.
[0030] Preferably, the user image is a retinal fundus image of the user comprising at least 3000*2000 pixels, a fundus region of at least 45 degrees, and a pixel resolution of at least 150 dpi.
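As an illustration of these minimum requirements, a minimal validation sketch is given below. The thresholds come from the paragraph above; the function name, file handling and the idea of reading the field of view from camera metadata are assumptions for illustration only.

```python
from PIL import Image  # Pillow, assumed available for reading image metadata

MIN_WIDTH, MIN_HEIGHT = 3000, 2000   # at least 3000*2000 pixels
MIN_FOV_DEGREES = 45                 # at least a 45-degree fundus region
MIN_DPI = 150                        # at least 150 dpi

def meets_minimum_spec(path, fov_degrees):
    """Return True if the retinal fundus image satisfies the stated minimums.

    `fov_degrees` is assumed to come from the fundus camera's own metadata,
    since the image file itself does not record the field of view.
    """
    with Image.open(path) as img:
        width, height = img.size
        dpi = img.info.get("dpi", (0, 0))[0]  # not all formats store a dpi value
    return (width >= MIN_WIDTH and height >= MIN_HEIGHT
            and fov_degrees >= MIN_FOV_DEGREES and dpi >= MIN_DPI)

if __name__ == "__main__":
    print(meets_minimum_spec("user_fundus.jpg", fov_degrees=45))
```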
[0031] Preferably, the user image is a retinal fundus image of the user, and comparing the user image with the reference images further comprises a comparison using at least one of the following eye-state judgement elements (a rule-style sketch of these elements follows this list):
(a) multiple retinal blood vessels are visible in the image;
(b) the cup-to-disc ratio is less than 0.3; and
(c) the absence of at least one of the following elements:
(i) visible media opacity;
(ii) diabetic retinopathy indicators, comprising at least one of dot haemorrhages, microaneurysms and hard exudates;
(iii) clinical observations;
(iv) macular edema;
(v) exudates near the macula;
(vi) exudates on the macula;
(vii) laser scars;
(viii) cataract;
(ix) glaucoma;
(x) diabetic retinopathy; and
(xi) age-related macular degeneration, comprising at least one of multiple large drusen, geographic atrophy with marked areas of hypopigmentation, and choroidal neovascularisation, wherein the age-related macular degeneration is indicated as at least one of atrophic, neovascular and exudative;
wherein at least one of the eye-state judgement elements may be excluded from the judgement elements used for classification.
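The sketch below is a hypothetical illustration of how such judgement elements might be combined into a code-1/code-2 decision, assuming upstream image-analysis steps have already produced boolean findings and a cup-to-disc ratio; all field names are invented for illustration.

```python
# Hypothetical findings produced by upstream image-analysis steps.
NEGATIVE_FINDINGS = [
    "media_opacity", "dr_indicator", "clinical_observation", "macular_edema",
    "exudate_near_macula", "exudate_on_macula", "laser_scar", "cataract",
    "glaucoma", "diabetic_retinopathy", "amd",
]

def classify_eye_state(findings, cup_disc_ratio, vessels_visible, excluded=()):
    """Return classification code 1 (normal / low risk) or 2 (abnormal / high risk).

    `excluded` lists judgement elements deliberately left out of the decision,
    as the paragraph above allows.
    """
    active = [f for f in NEGATIVE_FINDINGS if f not in excluded]
    no_negative_findings = not any(findings.get(f, False) for f in active)
    if vessels_visible and cup_disc_ratio < 0.3 and no_negative_findings:
        return 1
    return 2

# Example: visible vessels, CDR 0.25 and no listed findings -> code 1
print(classify_eye_state({"cataract": False}, cup_disc_ratio=0.25, vessels_visible=True))
```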
[0032] Another embodiment of the present invention is an image processing system. The system comprises a server and a user terminal in communication with the server. The server stores reference images and a computation model, the reference images including multiple reference images with classification code 1 and multiple reference images with classification code 2. The user terminal is used to generate an initial user file, the initial user file including user data and a user image. After receiving the user file, the server runs the computation model, compares the user image with the reference images, determines the classification code of the user image as either 1 or 2, stores the classification code of the user image into the initial user file to generate an updated user file, and sends the updated user file to the user terminal.
[0033] Preferably, the updated user file includes a colour mark, the colour mark comprising a green mark corresponding to classification code 1 and a red mark corresponding to classification code 2.
[0034] Preferably, the system further comprises an artificial intelligence engine trained on the reference images, the artificial intelligence engine being used to construct the computation model.
[0035] Preferably, the artificial intelligence engine comprises at least one algorithm, or a combination of algorithms, selected from machine learning algorithms and deep learning algorithms.
[0036] Preferably, the artificial intelligence engine comprises at least one of a support vector machine (SVM), a gradient boosting machine (GBM), a random forest and a convolutional neural network.
[0037] By providing an economical and convenient solution for classifying users' retinal fundus images, the present invention has the potential to substantially reduce preventable blindness and visual impairment worldwide, in both developing and developed countries. According to embodiments of the present invention, users with healthy eyes can save the unnecessary time and resources of visiting an ophthalmologist, while users with potential eye disease can learn their eye condition in time and arrange a consultation with an ophthalmologist promptly. Ophthalmologists, in turn, can devote their limited time and resources to examining the users who genuinely need medical treatment.
Brief Description of the Drawings
[0038] Embodiments of the present invention are described in detail below by way of example, with reference to the accompanying drawings, in which:
[0039] Fig. 1 is a schematic diagram of computation model construction in an image processing method and system according to an embodiment of the present invention.
[0040] Fig. 2 is a schematic diagram of retinal fundus image loading in an image processing method and system according to an embodiment of the present invention.
[0041] Fig. 3 is a schematic diagram of retinal fundus image classification in an image processing method and system according to an embodiment of the present invention.
[0042] Fig. 4 is a flowchart of the computation model construction steps of an image processing method and system according to an embodiment of the present invention.
[0043] Fig. 5 is a flowchart of the retinal fundus image loading steps of an image processing method and system according to an embodiment of the present invention.
[0044] Fig. 6 is a flowchart of the retinal fundus image classification steps of an image processing method and system according to an embodiment of the present invention.
[0045] Fig. 7 is a schematic diagram of communication paths through a portal in an image processing method and system according to an embodiment of the present invention.
[0046] Fig. 8A is a retinal fundus image of a healthy eye.
[0047] Figs. 8B to 8E are retinal fundus images showing several eye diseases.
[0048] Figs. 9 and 10 are schematic diagrams of an image processing method and system according to an embodiment of the present invention.
[0049] Fig. 11 is a schematic diagram of a retinal fundus image classification system of an image processing method and system according to another embodiment of the present invention.
[0050] Fig. 12 is a flowchart of the computation model construction method of the embodiment shown in Fig. 11.
[0051] Fig. 13A is a flowchart of the retinal fundus image loading method of the embodiment shown in Fig. 11; [0052] Fig. 13B is a flowchart of the retinal fundus image classification method of the embodiment shown in Fig. 11.
[0053] Fig. 14A is an example retinal fundus image of a healthy eye.
[0054] Figs. 14B to 14E are example retinal fundus images showing several eye diseases.
[0055] Fig. 15A is an example of a sign mark representing classification code 1; Fig. 15B is an example of a sign mark representing classification code 2 with sub-classification code 2-1; Fig. 15C is an example of a sign mark representing classification code 2 with sub-classification code 2-2.
Detailed Description of Embodiments
[0056] In the present disclosure, descriptions of elements and technical features, the numbering or use of particular elements in certain figures, or references to them in the corresponding descriptive material, may cover identical, equivalent or similar elements and technical features identified with the same reference numbers in other figures or descriptive material associated therewith. Although aspects of the disclosure are described in conjunction with the embodiments presented herein, it should be understood that the specific descriptions of the embodiments are not intended to limit the disclosure to those embodiments. On the contrary, the disclosure is intended to cover the embodiments described herein as well as their alternatives, modifications and equivalent processes and systems within the scope of the disclosure defined by the appended claims. Furthermore, in the following detailed description, specific details are set forth in order to provide a thorough understanding of the disclosure. However, persons of ordinary skill in the art will recognise that the disclosure may be practised without these specific details, or with various combinations of details drawn from the specific embodiments. In some instances, well-known systems, methods, steps and components are not described in detail. Unless otherwise indicated, the terms "comprising", "comprise", "including", "include" and their grammatical variants as used herein are intended to represent "open" or "inclusive" language, so that the methods and systems they define include the elements recited in the claims but also allow additional, unrecited elements to be included. The terms "transmission", "reception" or "loading" and their grammatical variants as used herein are intended to represent a connection between two objects, elements or devices, whether linked together directly, or connected indirectly — electrically or wirelessly — through other components such as routers, the Internet, networks and servers.
[0057] As shown in Fig. 1, computation model construction 100 in an image processing method and system according to an embodiment of the present invention includes loading expert-graded retinal fundus images 101 into a database 102 that stores the expert-graded retinal fundus images, training an artificial intelligence (AI) engine 103 with the expert-graded retinal fundus images 101, and constructing a model 104 using the AI engine 103.
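A minimal sketch of this construction step is shown below, assuming the expert-graded reference images sit in a folder and their classification codes (1 or 2) are listed in a two-column CSV file; the file layout, names and working image size are assumptions for illustration.

```python
import csv
import numpy as np
from PIL import Image

IMG_SIZE = (256, 256)  # assumed working resolution for training

def load_reference_dataset(label_csv, image_dir):
    """Build (X, y) from expert-graded reference fundus images.

    `label_csv` is assumed to have two columns: filename, classification_code.
    """
    images, labels = [], []
    with open(label_csv, newline="") as fh:
        for filename, code in csv.reader(fh):
            img = Image.open(f"{image_dir}/{filename}").convert("RGB").resize(IMG_SIZE)
            images.append(np.asarray(img, dtype=np.float32) / 255.0)
            labels.append(int(code) - 1)   # code 1 -> class 0, code 2 -> class 1
    return np.stack(images), np.array(labels)

# X, y = load_reference_dataset("expert_labels.csv", "reference_images")
# X and y are then handed to the AI engine (see the CNN sketch further below).
```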
[0058] Fig. 2 is a schematic diagram of retinal fundus image loading 200 in an image processing method and system according to an embodiment of the present invention. The retinal fundus image loading 200 illustrates: a portable fundus camera captures a user retinal fundus image 201 and combines the user retinal fundus image with user data to create an initial user file 202 containing the user retinal fundus image and the user data; the initial user file 203 is transmitted by mobile phone to a mobile transmission tower connected to a network 204; the initial user file is received at a web server 205; and the initial user file is loaded into a server database 206 that stores user files.
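The following sketch illustrates one possible way to bundle and upload the initial user file. The JSON structure, field names and upload URL are assumptions, not the patent's actual transport format.

```python
import base64
import json
import urllib.request

def build_initial_user_file(image_path, user_data):
    """Bundle the captured fundus image with the user data (a minimal sketch)."""
    with open(image_path, "rb") as fh:
        encoded = base64.b64encode(fh.read()).decode("ascii")
    return {"user_data": user_data, "user_image": encoded}

def upload_user_file(user_file, url="https://example.org/portal/upload"):
    """Send the initial user file to the web server over the mobile network."""
    request = urllib.request.Request(
        url, data=json.dumps(user_file).encode("utf-8"),
        headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(request) as response:
        return response.status

# user_file = build_initial_user_file("fundus.jpg", {"name": "A. User", "age": 54})
# upload_user_file(user_file)
```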
[0059] Fig. 3 is a schematic diagram of retinal fundus image classification 300 in an image processing method and system according to an embodiment of the present invention. The retinal fundus image classification 300 illustrates the database 206 that stores the initial user files. Using the computation model 104, the user image in each initial user file is compared with the multiple reference images with classification code 1 or 2 stored in the database 206, and the classification code of the user image is determined as either 1 or 2, where 301 denotes user files whose user images are determined to have classification code 1 and 302 denotes user files whose user images are determined to have classification code 2. After the classification code of the user image is determined, it is stored into the initial user file to generate an updated user file. The updated user file is then sent to the user terminal, delivering the classification code of the user image and the related information to the user.
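A server-side sketch of this classification-and-update step is given below. How the model, image decoding and return transport are wired together is assumed for illustration; the model could be, for example, the CNN sketched later in this description.

```python
import json

def classify_and_update(user_file, model, decode_image):
    """Server-side sketch: determine the classification code and write it back.

    `model` is the trained computation model (e.g. the Keras CNN sketched later);
    `decode_image` turns the stored image bytes into the model's input array.
    Both are assumptions about how the surrounding system is wired together.
    """
    x = decode_image(user_file["user_image"])
    prob_abnormal = float(model.predict(x[None, ...])[0][1])
    code = 2 if prob_abnormal >= 0.5 else 1
    updated = dict(user_file)
    updated["classification_code"] = code
    updated["follow_up_code"] = 1 if code == 1 else 2  # first or second follow-up code
    return updated

def send_to_user_terminal(updated_file, send):
    """`send` is whatever transport returns the update to the user terminal."""
    send(json.dumps(updated_file))
```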
[0060] Fig. 4 is a flowchart of the computation model construction steps 400 of an image processing method and system according to an embodiment of the present invention. The flowchart of computation model construction 400 shows the following steps:
401: load multiple expert-classified retinal fundus images into a database.
402: train the AI engine on the expert-classified retinal fundus images.
403: using the AI engine, construct a computation model based on the expert-classified retinal fundus images.
404: further train the AI engine on retinal fundus images classified by ophthalmologists with classification codes 1 and 2.
[0061] Fig. 5 is a flowchart of the retinal fundus image loading steps 500 of an image processing method and system according to an embodiment of the present invention. The retinal fundus image loading 500 comprises the steps:
501: capture a retinal fundus image of the user's retina in the field with a portable fundus camera.
502: create an initial user file containing the user data and the user's retinal fundus image.
503: transmit the initial user file to the server via a national or global portal, using a wireless data transmitter.
504: receive the initial user file at the server.
505: load the initial user file into the database.
[0062] Fig. 6 is a flowchart of the retinal fundus image classification steps 600 of an image processing method and system according to an embodiment of the present invention. The classification 600 comprises the steps:
601: using the computation model created by the AI engine, compare the user's retinal fundus image with the reference images and determine the classification code of the user image as either 1 or 2.
602: store the classification code of the user image into the initial user file to generate an updated user file.
603: for a user file whose classification code is determined to be 1, the method and system generate a first follow-up code and add it to the updated user file, reminding the user to send retinal fundus images to the system periodically for subsequent classification; for a user file whose classification code is determined to be 2, the method and system generate a second follow-up code and add it to the updated user file, advising the user to consult an ophthalmologist for further screening and any necessary treatment.
[0063] Fig. 7 is a schematic diagram of a communication path 700 through a portal 702 in an image processing method and system according to an embodiment of the present invention. As shown in Fig. 7, the communication path 700 through the portal includes communication to and from the portal by laptop 704, smartphone 706, tablet computer 708, computer 710 and optical centre 712. The optical centre receives communications from users in the countryside 714 and villages 716.
[0064] One embodiment of the present invention provides a method for classifying retinal fundus images, which classifies a user's retinal fundus image to determine whether the user is at risk of eye disease. The method comprises the following steps: (a) loading multiple expert-classified reference retinal fundus images into a server database; (b) training an AI engine on the expert-classified reference retinal fundus images; (c) using the AI engine, constructing a computation model based on the expert-classified reference retinal fundus images, each reference retinal fundus image being assigned a classification code, where classification code 1 indicates that the corresponding retinal fundus image is classified as "normal" or "low eye-disease risk" — for example the "normal" class retinal fundus image shown in Fig. 8A — and classification code 2 indicates that the corresponding retinal fundus image is classified as "abnormal" or "high eye-disease risk", including retinal fundus images with "diabetic retinopathy" image features as shown in Fig. 8B, retinal fundus images with "glaucoma" image features as shown in Fig. 8C (where 802 indicates a cup-to-disc ratio (CDR) greater than or equal to 0.75), retinal fundus images with "cataract" image features as shown in Fig. 8D, and retinal fundus images with "age-related macular degeneration (AMD)" image features as shown in Fig. 8E; (d) receiving an initial user file from a web server, the initial user file including user data and a user image; (e) loading the initial user file into the server database; (f) using the computation model, comparing the retinal fundus image in the initial user file with the reference retinal fundus images in the server and determining the classification code of the user image as either 1 or 2; and (g) storing the classification code of the user image into the initial user file to record the classification of the user's retinal fundus image and to generate an updated user file, and sending the updated user file to the user terminal.
[0065] The algorithms of the AI engine include at least one algorithm, or a combination of algorithms, selected from machine learning algorithms and deep learning algorithms, and include at least one of a support vector machine (SVM), a gradient boosting machine (GBM), a random forest and a convolutional neural network. The user file contains user data and the unclassified user retinal fundus images.
[0066] In the comparison analysis and classification code determination step, the number of retinal fundus images stored in the initial user file may be two to four.
[0067] In an alternative to the first embodiment, comparing the user image with the reference images includes a comparison based on at least one of the following eye-state judgement elements: (a) multiple retinal blood vessels are visible in the image; (b) the cup-to-disc ratio is less than 0.3; and (c) the absence of at least one of the following: (i) visible media opacity; (ii) diabetic retinopathy indicators, including at least one of dot haemorrhages, microaneurysms and hard exudates; (iii) clinical observations; (iv) macular edema; (v) exudates near the macula; (vi) exudates on the macula; (vii) laser scars; (viii) cataract; (ix) glaucoma; (x) diabetic retinopathy; (xi) age-related macular degeneration, including at least one of multiple large drusen, geographic atrophy with marked areas of hypopigmentation, and choroidal neovascularisation, wherein the age-related macular degeneration is indicated as at least one of atrophic, neovascular and exudative. At least one of the eye-state judgement elements may be left out.
[0068] Excluding one or more eye-state judgement elements allows the classification process for retinal fundus images to be adapted to the needs and available resources of a given country.
[0069] In another alternative to the first embodiment: (i) the expert-classified retinal fundus images may be retinal fundus images classified by an ophthalmologist; (ii) the method may further comprise adding a first or second follow-up code to the updated file — the first follow-up code, corresponding to user retinal fundus images classified as 1, may indicate that the user is advised to submit user retinal fundus images to the system within a specified period, for example within 6 to 12 months, for comparison and classification, while the second follow-up code, corresponding to user retinal fundus images classified as 2, may indicate that the user is scheduled to submit updated user retinal fundus images for verification and is advised to consult an ophthalmologist; (iii) the method may further comprise sending the updated user file with the first or second follow-up code to the user terminal; (iv) the method may further comprise training the AI engine based on the ophthalmologist classification of the retinal fundus images of at least one user file with classification code 1; (v) the method may further comprise training the AI engine based on the ophthalmologist classification of the retinal fundus images of at least one user file with classification code 2; (vi) each retinal fundus image may comprise at least 3000*2000 pixels, a fundus region of at least 45 degrees, and a pixel resolution of at least 150 dpi; (vii) the retinal fundus images of at least one user file may be captured with a portable fundus camera, and the step of receiving from the server may include transmission of the at least one user file via a wireless data transmitter connectable to the portable fundus camera; (viii) the retinal fundus images of at least one user file may be captured with a portable fundus camera, and the step of receiving from the web server may include transmission of the at least one user file via the portable fundus camera, where the fundus camera may include a wireless data transmitter; (ix) the web server may host at least one national portal and at least one global portal; or (x) the at least one user file may be uploaded to the web server by at least one portable application. Communication may also be realised over wired connections such as telephone, cable, DSL and optical fibre.
[0070] For a user file containing retinal fundus images with classification code 1, the first follow-up code may indicate a reminder to the user to capture user retinal fundus images again within a predetermined period, for example within 6 to 12 months, and transmit them to the server for comparison, so that the classification code of the newly captured user retinal fundus images can be determined.
[0071] For a user file containing user retinal fundus images with classification code 2, the second follow-up code may indicate a recommendation that the user see an ophthalmologist, and may further include booking a medical facility through the web server interface to arrange and confirm an appointment date.
[0072] The image processing method of this embodiment of the invention further includes training the AI engine based on the retinal fundus images of at least one user file with classification code 2. Retinal fundus images classified as abnormal by the system can be forwarded to experts for classification; once classified, the expert-classified retinal fundus images can be used to further train the AI engine. In addition: (vi) each retinal fundus image may comprise at least 3000*2000 pixels, a fundus region of at least 45 degrees, and a pixel resolution of at least 150 dpi; (vii) the retinal fundus images of at least one user file may be captured with a portable fundus camera, and the step of receiving from the web server may include transmission of the at least one user file via a wireless data transmitter connectable to the portable fundus camera; (viii) the retinal fundus images of at least one user file may be captured with a portable fundus camera, and the step of receiving from the web server may include transmission of the at least one user file via the portable fundus camera, where the fundus camera may include a wireless data transmitter; (ix) the web server may host at least one national portal and at least one global portal; or (x) the at least one user file may be uploaded to the web server via at least one portable application.
[0073] Through a wireless data transmitter (such as a mobile phone) connected to the portable fundus camera, the user retinal fundus images and user data can be sent to a data centre, or to a laboratory hosting the AI engine 103 and the model 104, and classified quickly. Follow-up actions, such as scheduling routine re-classification or booking a clinic ophthalmologist, can be recommended from the classification of the user's retinal fundus images. In this way, a portable fundus camera can be used to capture user retinal fundus images in rural or remote areas. User data can be entered through a portable application and then transmitted over the local mobile data network. A portable fundus camera with a wireless data transmitter can be used for user data entry and wireless data transmission. Through the portable application, the user can obtain the classification code of the retinal fundus image and the first or second follow-up code and take corresponding action, such as obtaining a referral to a public hospital or accessing health information services, whose content can be customised according to the case notes contained in his or her user file.
[0074] The server 205 can be used to host multiple portals for uploading user files. Portals can be organised by country, region or language, or globally. At least one portal can be accessible via the portable application.
[0075] A second embodiment of the present invention relates to an image processing method and system running on a supercomputing system (which can be realised in a public or private cloud, or on dedicated enterprise computing resources) for classifying users' retinal fundus images.
[0076] As shown in Figs. 9 and 10, a user at a user terminal — for example a user living in a rural area 910 or a small city 920 — has a user retinal fundus image captured by a nearby camera 912, 922 (Fig. 10, step 1012), an initial user file is generated, and the initial user file is sent over a communication network 930 to a server equipped with the image processing system of the present invention, for example a system server 942 located in a big city 940, where it is compared in order to obtain the classification code 1016, 1018 of the user's retinal fundus image (Fig. 10, step 1014). Classification code 1016 is a code "1" classification, indicating that the user's retinal fundus image belongs to the "normal" or "low eye-disease risk" class; classification code 1018 is a code "2" classification, indicating that the user's retinal fundus image belongs to the "abnormal" or "high eye-disease risk" class. The method and system may further include generating a first follow-up code 1026 and a second follow-up code 1028, corresponding to classification code 1 and classification code 2 respectively. The classification codes 1016, 1018 and the first and second follow-up codes 1026, 1028 are stored in the updated user files 1036, 1038 respectively, which are sent to the user terminals 910, 920.
[0077] The image processing method and system provided in this embodiment develop and train an artificial intelligence ("AI") engine based on reference images tagged with classification codes identified by experts or ophthalmologists, construct a computation model using the AI engine, and use the computation model to compare the user's retinal fundus image with the classification-coded reference images and make a prediction, so as to obtain the classification of the user's retinal fundus image and feed the classification result back to the user. Users whose retinal fundus images are classified into the first class, i.e. corresponding to classification code 1, are determined to belong to the low-risk eye-disease population; they need not see a doctor for the time being and can have a routine examination after a regular interval, for example 6 to 12 months. Users whose retinal fundus images are classified into the second class, i.e. corresponding to classification code 2, are determined to belong to the population at high risk of eye disease and/or related diseases. The image processing method and system of this embodiment of the invention further include generating a first or second follow-up code, storing the first or second follow-up code in the updated user file, and sending the updated user file to the user terminal.
[0078] According to this embodiment, the classification-coded reference images, the AI engine and the computation model can effectively identify and classify retinal fundus images corresponding to the major eye-related diseases, including diabetic retinopathy (DR), age-related macular degeneration (AMD), glaucoma and cataract.
[0079] When the computation model of this embodiment is used to compare and analyse the user's retinal fundus image against the reference images, if the similarity between the user's retinal fundus image and the reference images classified into the first class exceeds the decision threshold of the computation model, the user's retinal fundus image is placed into the same class, i.e. the first class. If the similarity between the user's retinal fundus image and the reference images classified into the second class exceeds the decision threshold of the computation model, the user's retinal fundus image is placed into the same class, i.e. the second class.
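A simplified sketch of this threshold comparison follows; the similarity scores are assumed to come from the computation model, and the handling of the below-threshold case is an assumption.

```python
def assign_class_by_similarity(sim_to_class1, sim_to_class2, threshold):
    """Assign the user image to the class whose reference images it resembles
    more strongly than the model's decision threshold (a simplified sketch of
    paragraph [0079]; the scores are assumed to come from the computation model).
    """
    if sim_to_class1 >= threshold and sim_to_class1 >= sim_to_class2:
        return 1   # first class: normal / low risk
    if sim_to_class2 >= threshold:
        return 2   # second class: abnormal / high risk
    return None    # below threshold for both: left for manual review (assumption)

print(assign_class_by_similarity(0.82, 0.40, threshold=0.6))  # -> 1
```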
[0080] According to an embodiment of the invention, the reference images are manually screened by qualified ophthalmologists; based on the screening results, each reference image is classified into one of the first class and the second class and assigned the corresponding classification code.
[0081] Based on multiple reference images with classification codes, the AI engine is trained to construct a computation model, with which the user's retinal fundus images are compared to obtain their classification codes.
[0082] The server storing the multiple classification-coded reference images and the computation model can implement a supervised machine learning (ML) or artificial intelligence (AI) framework process in the form of deep learning (DL) or a deep neural network (DNN). The resulting DNN algorithm and computation model can be used to compare the user's retinal fundus images with the reference images on the server to obtain the classification codes of the user's retinal fundus images.
[0083] The computation model may be implemented on, but is not limited to, a desktop-class workstation (with or without a GPU). The operating systems used include, but are not limited to, Windows, iOS, Android and Linux-based systems. The computation model can also be hosted in a cloud-based service provided by a third-party vendor's platform. The image processing system and/or platform can include, but is not limited to, Nvidia CUDA or open-source platforms such as OpenGL, OpenCV and OpenCL. The computation model can be implemented in a choice of high-level programming languages and platforms, including but not limited to Matlab, Python, C++ and R, and packages built on these platforms.
[0084] An image processing method according to an embodiment of the present invention includes accessing a database containing digitised retinal fundus images and storing the information for processing. The original image is mapped to a 3-tuple consisting of three separate matrices, where each matrix represents one colour of RGB (red, green, blue). If desired, the RGB-based image can be converted or reduced to a grayscale image.
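A minimal sketch of this RGB mapping and optional grayscale reduction, using Pillow and NumPy; the luma weights used for the grayscale conversion are a common convention rather than something specified here.

```python
import numpy as np
from PIL import Image

def image_to_rgb_tuple(path):
    """Map a digitised fundus image to a 3-tuple of matrices, one per RGB channel."""
    rgb = np.asarray(Image.open(path).convert("RGB"), dtype=np.float32)
    red, green, blue = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    return red, green, blue

def to_grayscale(red, green, blue):
    """Optionally reduce the RGB representation to a single grayscale matrix
    (ITU-R 601 luma weights, a common convention)."""
    return 0.299 * red + 0.587 * green + 0.114 * blue

# r, g, b = image_to_rgb_tuple("fundus.jpg")
# gray = to_grayscale(r, g, b)
```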
[0085] The image processing method according to an embodiment of the present invention also includes reshaping all images in the data set to the same spatial dimensions, although this is not an absolute requirement. The number of pixels in the width and height of the images can be optimised for the amount of time needed to train the classification model. Depending on image quality, image processing methods such as image enhancement, image denoising, image restoration and deblurring, scaling, translation, rotation and edge detection can be applied. The mapped images form a data set that serves as the input for training the classification and computation models.
[0086] These images can also be further processed via other transform methods, including but not limited to principal component analysis (also known as the Karhunen-Loeve transform) and dynamic mode decomposition, in which a singular value decomposition of the matrix is performed. By using transform techniques, alternative and supplementary DNNs can be developed and trained on these transformed images, in association with the main DNN model, so that retinal fundus images can be compared, analysed and accurately classified.
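The following sketch illustrates resizing a data set to common dimensions and a simple SVD-based projection as a stand-in for the PCA / Karhunen-Loeve style transforms mentioned above; the image size and number of components are arbitrary choices.

```python
import numpy as np
from PIL import Image

def reshape_dataset(paths, size=(256, 256)):
    """Resize every image in the data set to the same spatial dimensions."""
    return np.stack([
        np.asarray(Image.open(p).convert("L").resize(size), dtype=np.float32) / 255.0
        for p in paths])

def svd_features(images, n_components=50):
    """Project flattened, mean-centred images onto their leading singular
    vectors, a simple stand-in for the PCA / Karhunen-Loeve transforms above."""
    flat = images.reshape(len(images), -1)
    flat = flat - flat.mean(axis=0)
    _, _, vt = np.linalg.svd(flat, full_matrices=False)
    return flat @ vt[:n_components].T

# features = svd_features(reshape_dataset(["a.jpg", "b.jpg", "c.jpg"]))
```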
[0087] The architecture used for the classification model is, but is not limited to, a convolutional neural network (CNN). The labelled data set is fed into the CNN architecture, and the classification model is trained using the inferred function so as to predict new, unseen images. The accuracy of the CNN architecture depends on a series of parameters: the number of nodes in each layer, the choice of activation functions, the loss function, the dropout percentage, the number of epochs, and so on.
[0088] The classification and computation models can be further enhanced with k-fold cross-validation. Other statistical techniques, not limited to those described above, can also be applied to improve the accuracy of the classification model. K-fold cross-validation is a model-validation technique for assessing the statistical performance of the trained classification model: the labelled data set is divided into a training data set and a test data set with different percentage weights.
[0089] A possible example of the classification model is described below to illustrate the process (a runnable sketch follows the list):
(1) The digitised images and their labels are mapped to a data set. Image scaling can be performed by first computing the mean and standard deviation of the corresponding pixel intensities in each of the red, green and blue channels. The results are stored as a 3-tuple corresponding to the red, green and blue channel means. The images in the data set are scaled by subtracting the mean corresponding to each of the red, green and blue channels and dividing by the standard deviation corresponding to each of the red, green and blue channels.
(2) The CNN framework can be built in a high-level language such as R or C++, with the help of packages such as Keras. The CNN can be designed with a certain number of layers composed of interconnected nodes. The links between nodes in different layers are defined by functions composed of weights and biases. An activation function such as ReLU is typically used to update the weights and biases. A pooling function can be added to extract a subset of a CNN layer, although this is not required. A dropout percentage, for example 20%, is also added after each CNN layer. In the last layer, the SOFTMAX activation function is used.
(3) The classification model goes through multiple epochs to improve its accuracy. The optimiser in the model is not limited to ADAM; there are other optimisers such as RMSPROP. At each epoch, the labelled data set can be divided into a training data set and a test data set, for example — but not limited to — a 4:1 split. The ratio can be further refined to whatever is considered most suitable for training the classification model.
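A runnable sketch in the spirit of steps (1)-(3) is given below, using TensorFlow/Keras. The number of layers, filter counts and image size are assumptions; only the per-channel normalisation, ReLU activations, 20% dropout, softmax output, ADAM optimiser and 4:1 split are taken from the example above.

```python
import tensorflow as tf
from sklearn.model_selection import train_test_split

def normalise_per_channel(images):
    """Step (1): scale images by the mean and standard deviation of each RGB channel."""
    mean = images.mean(axis=(0, 1, 2))
    std = images.std(axis=(0, 1, 2)) + 1e-7
    return (images - mean) / std

def build_cnn(input_shape=(256, 256, 3), num_classes=2):
    """Step (2): a small CNN with ReLU activations, pooling, 20% dropout after
    each convolutional block, and a softmax output layer."""
    return tf.keras.Sequential([
        tf.keras.Input(shape=input_shape),
        tf.keras.layers.Conv2D(32, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Dropout(0.2),
        tf.keras.layers.Conv2D(64, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Dropout(0.2),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(128, activation="relu"),
        tf.keras.layers.Dense(num_classes, activation="softmax"),
    ])

def train(images, labels, epochs=10):
    """Step (3): ADAM-optimised training on a 4:1 train/test split."""
    x = normalise_per_channel(images.astype("float32"))
    x_train, x_test, y_train, y_test = train_test_split(x, labels, test_size=0.2)
    model = build_cnn(input_shape=x.shape[1:])
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    model.fit(x_train, y_train, epochs=epochs, validation_data=(x_test, y_test))
    return model

# model = train(X, y)   # X, y from the dataset-loading sketch earlier
```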
[0090] A possible set of verification steps for the trained classification model is described below to illustrate the process (an evaluation sketch follows the list).
(1) After training of the classification model is completed, a new, unseen set of retinal fundus images, not used in training or testing, is presented to the trained model. These new, unseen retinal fundus images are manually graded by qualified oculists, who determine and assign their classification codes.
(2) The probability relating to the user's ocular health is obtained from the retinal fundus images. The next step is for the qualified oculist to verify the generated probabilities by screening the unseen images against their respective assigned classifications (first class or second class).
(3) Further verification steps can be applied to identify the main underlying eye diseases, including but not limited to DR, AMD, glaucoma and cataract. A qualified oculist may find it difficult to capture every subtle nuance of visual impairment and to draw a conclusion on the user's ocular health. The trained computation model can correlate the conclusion reached by the qualified oculist with the probability generated by the trained classification computation model.
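A hypothetical sketch of this verification step (the 0.5 threshold and the returned field names are assumptions, not part of the disclosed method), summarising agreement between the model's predictions on unseen images and the oculist's grading:

```python
def verify_model(predicted_probs, oculist_labels, threshold=0.5):
    """Compare model output on unseen images with the oculist's class labels.

    `predicted_probs` is the model's probability of the second class (abnormal) for each
    unseen image; `oculist_labels` holds 1 (first class) or 2 (second class) per image.
    """
    predicted_codes = [2 if p >= threshold else 1 for p in predicted_probs]
    pairs = list(zip(predicted_codes, oculist_labels))
    agreement = sum(1 for pred, ref in pairs if pred == ref) / len(pairs) if pairs else 0.0
    return {
        "agreement_rate": agreement,
        "disagreements": [i for i, (pred, ref) in enumerate(pairs) if pred != ref],
    }

# Usage sketch: three unseen images, two of which the oculist graded as second class.
report = verify_model([0.12, 0.91, 0.48], [1, 2, 2])
```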
[0091] An automated process at the artificial intelligence portal can generate a concise, readable retinal fundus image classification and analysis report, which is sent to the user in a peripheral region to inform the user of the classification code and follow-up code of his or her retinal fundus images. Based on the information recorded in the classification code and follow-up code, the user knows whether a consultation with an oculist for further examination is needed. The classification report contains the retinal fundus image classification code obtained from the comparative analysis of the computation model, together with a corresponding colour marker that facilitates identification. For example, a green marker and/or '(-)' can represent classification code 1, indicating that the user's retinal fundus images are classified as the first class; a red marker and/or '(+)' can represent classification code 2, indicating that the user's retinal fundus images are classified as the second class.
[0092] The example described above is reported as follows (a sketch of the report generation follows the table):

Classification | Colour marker | Symbol | Report meaning
---|---|---|---
First class | Green | (-) | The user will not be advised to consult an oculist for further examination
Second class | Red | (+) | The user will be advised to consult an oculist for further examination
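A hypothetical sketch of how such a report entry could be generated from a classification code (the marker strings and advice text are illustrative assumptions):

```python
def classification_report_entry(classification_code):
    """Map a classification code to the colour marker, symbol and advice used in the report."""
    if classification_code == 1:
        return {"class": "First class", "colour": "green", "symbol": "(-)",
                "meaning": "Consultation with an oculist for further examination is not advised."}
    if classification_code == 2:
        return {"class": "Second class", "colour": "red", "symbol": "(+)",
                "meaning": "Consultation with an oculist for further examination is advised."}
    raise ValueError(f"Unknown classification code: {classification_code}")
```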
[0093] According to another embodiment of the present invention, the computation model construction of the image processing method and system may include: loading expert-graded retinal fundus images into a database holding the expert-graded retinal fundus images, training an AI engine using the expert-graded retinal fundus images, and constructing a model using the AI engine.
[0094] According to another embodiment of the present invention, the retinal fundus image loading system of the image processing method and system may include: (i) a portable fundus camera for capturing the user's retinal fundus images and combining them with the user data, to create an initial user file containing the user's retinal fundus images and user data; (ii) a mobile phone for transmitting the initial user file to a mobile transmission tower connected to a network; and (iii) a network server for receiving the initial user file and loading it into a server database storing user files.
[0095] Figure 11 is a schematic diagram of the retinal fundus image classification system 1100 of the image processing method and system according to another embodiment of the present invention.
[0096] The retinal fundus image classification system 1100 may include a server database 1106 storing a computation model 1104 and reference images, the reference images including multiple reference images with classification code 1 and multiple reference images with classification code 2. After receiving an initial user file (containing user data and user images), the server starts the computation model 1104, stores the user images in the database 1106, compares them with the multiple reference images with classification code 1 or 2, and determines the classification code of the user images as one of 1 and 2. The reference images with classification code 2 include multiple reference images with subclassification code 2-1 and multiple reference images with subclassification code 2-2. Further, the server starts the computation model 1104 and compares the user images determined to have classification code 2 with the multiple reference images with subclassification code 2-1 and the multiple reference images with subclassification code 2-2, so as to further determine the subclassification code of the user images with classification code 2 as one of 2-1 and 2-2. In Figure 11, 1111 denotes user files whose user images are determined to have classification code 1, 1112 denotes user files whose user images are determined to have classification code 2, 1121 denotes user files whose user images are determined to have subclassification code 2-1, and 1122 denotes user files whose user images are determined to have subclassification code 2-2. After the classification code and subclassification code of the user images are determined, they are stored in the initial user file to generate an updated user file. The updated user file is then sent to the user terminal, so that the classification code, subclassification code and related information of the user images are delivered to the user terminal.
[0097] Preferably, Figure 12 is a flow chart of the computation model construction method 1200 of the image processing method and system according to the embodiment shown in Figure 11. The computation model construction method 1200 includes the following steps (a sketch of this staged training flow follows the list):
Loading multiple expert-classified retinal fundus images into the database, as shown in block 1202;
Training the AI engine to operate on the expert-classified retinal fundus images, as shown in block 1204;
Constructing the computation model based on the expert-classified retinal fundus images, using the AI engine, as shown in block 1206;
Further training the AI engine based on retinal fundus images classified by oculists with classification codes 1 and 2, as shown in block 1208;
Further training the AI engine based on retinal fundus images classified by oculists with subclassification codes 2-1 and 2-2, as shown in block 1210.
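A highly simplified sketch of this staged training flow; the fit-style interface of the AI engine is an assumption made only for illustration, and any engine supporting initial training followed by further training could be substituted:

```python
def build_computation_model(ai_engine, expert_images, expert_labels,
                            class_images=None, class_labels=None,
                            subclass_images=None, subclass_labels=None):
    """Blocks 1202-1210: train on expert-graded images, then refine with oculist-graded data."""
    # Blocks 1202-1206: base model built from the expert-classified retinal fundus images.
    ai_engine.fit(expert_images, expert_labels)

    # Block 1208: further training with oculist-classified code 1 / code 2 images.
    if class_images is not None:
        ai_engine.fit(class_images, class_labels)

    # Block 1210: further training with oculist-classified subclassification 2-1 / 2-2 images.
    if subclass_images is not None:
        ai_engine.fit(subclass_images, subclass_labels)

    return ai_engine
```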
[0098] Preferably, Figure 13A is a flow chart of the retinal fundus image loading method 1300 of the image processing method and system according to the embodiment shown in Figure 11. The retinal fundus image loading method 1300 may include the following steps (a sketch of the upload flow follows the list):
Capturing the user's retinal fundus images with a portable fundus camera in the local region, as shown in block 1302;
Creating an initial user file containing the user data and the user's retinal fundus images, as shown in block 1304;
Transmitting the initial user file to the server through a wireless data transmitter, via a national portal or a global portal, as shown in block 1306;
Receiving the initial user file at the server, as shown in block 1308;
Loading the initial user file into the database, as shown in block 1310.
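A hypothetical sketch of the initial user file and its upload; the dataclass fields, JSON payload, endpoint URL and use of the requests library are assumptions made for illustration, standing in for the wireless data transmitter and portal described above:

```python
import base64
import json
from dataclasses import dataclass, asdict

import requests  # assumed HTTP transport for this sketch only

@dataclass
class InitialUserFile:
    user_id: str
    user_data: dict          # e.g. demographics and case notes
    retinal_image_b64: str   # fundus image encoded for transmission

def create_initial_user_file(user_id, user_data, image_bytes):
    """Block 1304: combine the captured retinal fundus image with the user data."""
    return InitialUserFile(user_id, user_data,
                           base64.b64encode(image_bytes).decode("ascii"))

def upload_user_file(user_file, portal_url="https://example-portal.invalid/upload"):
    """Blocks 1306-1310: transmit the file to the portal, which loads it into the database."""
    response = requests.post(portal_url, data=json.dumps(asdict(user_file)),
                             headers={"Content-Type": "application/json"})
    response.raise_for_status()
    return response.json()
```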
[0099] Figure 13B is a flow chart of the retinal fundus image classification method 1350 of the image processing method and system according to the embodiment shown in Figure 11. The classification method 1350 includes:
Comparing the user's retinal fundus images with the multiple reference images with classification code 1 and the multiple reference images with classification code 2, using the computation model created by the AI engine, and determining the classification code of the user images as one of 1 and 2, as shown in block 1352;
Comparing the user's retinal fundus images determined to have classification code 2 with the multiple reference images with subclassification code 2-1 and the multiple reference images with subclassification code 2-2, using the computation model created by the AI engine, so as to determine the subclassification code of the user's retinal fundus images with classification code 2 as one of 2-1 and 2-2, as shown in block 1354;
Storing the classification code and subclassification code of the user images in the initial user file to generate an updated user file, as shown in block 1356.
[0100] The classification method 1350 may further include the following (a combined sketch of the classification and follow-up codes appears after this paragraph):
For user files whose classification code is determined to be 1, the method and system generate a third follow-up code and add it to the updated user file, to remind the user to periodically send his or her retinal fundus images to the system for subsequent classification (for example, suggesting that within 6 to 12 months the user's retinal fundus images be captured again and transmitted to the server for comparison, to determine the classification code of the newly captured retinal fundus images), as shown in block 1358. For user files whose subclassification code is determined to be 2-1, the method and system generate a fourth follow-up code and add it to the updated user file, to suggest that the user consult an oculist, but not urgently, or to suggest that the medical institution/oculist does not need to give the user emergency medical treatment immediately (that is, a non-emergency situation). For user files whose subclassification code is determined to be 2-2, the method and system generate a fifth follow-up code and add it to the updated user file, to suggest that the user immediately consult an oculist, or to suggest that the medical institution/oculist immediately give the user emergency medical treatment for further screening and any necessary treatment (that is, an emergency situation), as shown in block 1360.
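A compact, hypothetical sketch of blocks 1352 to 1360; the model interface and the advice strings are illustrative assumptions:

```python
def classify_and_follow_up(user_image, model):
    """Two-stage classification (blocks 1352-1354) followed by follow-up code assignment.

    `model` is assumed to expose `class_code(image)` returning 1 or 2 and
    `subclass_code(image)` returning "2-1" or "2-2"; both are illustrative interfaces.
    """
    classification_code = model.class_code(user_image)            # block 1352
    subclassification_code = None
    if classification_code == 2:
        subclassification_code = model.subclass_code(user_image)  # block 1354

    # Blocks 1358-1360: choose the follow-up code recorded in the updated user file.
    if classification_code == 1:
        follow_up = ("third", "Re-send retinal fundus images in 6 to 12 months "
                              "for subsequent classification.")
    elif subclassification_code == "2-1":
        follow_up = ("fourth", "Consult an oculist; not urgent (non-emergency).")
    else:
        follow_up = ("fifth", "Consult an oculist immediately (emergency).")

    return {"classification_code": classification_code,           # stored in the user file
            "subclassification_code": subclassification_code,     # (block 1356)
            "follow_up_code": follow_up[0],
            "advice": follow_up[1]}
```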
[0101] Specifically, the communication paths of the image processing method and system through the portal may include communication to and from the portal via laptops, smartphones, tablet computers, computers and optical centres. The optical centres receive communications from users in rural areas and villages.
[0102] Specifically, classification code 1 can indicate that the corresponding retinal fundus images are classified as 'normal' or 'low eye disease risk', for example the 'normal' class retinal fundus image shown in Figure 14A. Classification code 2 can indicate that the corresponding retinal fundus images are classified as 'abnormal' or 'high eye disease risk'. Subclassification code 2-1 can indicate that the corresponding retinal fundus images are classified as 'abnormal but non-emergency'. Subclassification code 2-2 can indicate that the corresponding retinal fundus images are classified as 'abnormal and emergency', including, for example, the retinal fundus image with 'cataract' image features shown in Figure 14B, the retinal fundus image with 'diabetic retinopathy' image features shown in Figure 14C, the retinal fundus image with 'glaucoma' image features shown in Figure 14D, and the retinal fundus images with 'age-related macular degeneration (AMD)' image features shown in Figures 14E-14F. In Figure 14C, 1402 denotes a preretinal hemorrhage, 1404 and 1406 denote hard exudates, 1408 denotes cotton-wool patches, and 1410 and 1412 denote bleedings; in Figure 14D, 1414 denotes an increased cup disc ratio (cup disc ratio greater than or equal to 0.75); in Figure 14E, 1416 denotes geographic atrophy (advanced AMD); and in Figure 14F, 1418 denotes drusens.
[0103] According to the embodiments shown in Figures 11-13B, comparing the user images with the reference images to determine the classification code of the user images as one of 1 and 2 includes comparison based on at least one of the following eye state judgement elements: (a) the multiple retinal vessels presented in the image; (b) a cup disc ratio less than 0.3; and (c) the absence of at least one of the following: (i) visible medium turbidity; (ii) diabetic retinopathy indicators, including at least one of dot-like bleedings, aneurysms and hard exudates; (iii) CLBSflCAL OBSERVATION; (iv) macular edema; (v) exudate near the macula lutea; (vi) exudate on the macula lutea; (vii) laser scars; (viii) cataract; (ix) glaucoma; (x) diabetic retinopathy; and (xi) age-related macular degeneration, including at least one of multiple large drusens, geographic atrophy of a marked area with hypopigmentation, and choroidal neovascularization, wherein the age-related macular degeneration is at least one of atrophic, neovascular and exudative. At least one of the eye state judgement elements can be left out. Excluding one or more eye state judgement elements can make the classification process of the retinal fundus images suited to the needs and available resources of a given country.
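Paragraph [0103] effectively defines a configurable checklist. A hypothetical sketch of how the judgement elements could be represented and selectively excluded per country; the finding names are abbreviated assumptions, and the rule is interpreted here, also as an assumption, as requiring that none of the listed findings be present:

```python
# Findings (i)-(xi) of paragraph [0103]; any judgement element may be excluded per deployment.
CLASS_1_FINDINGS = (
    "visible_medium_turbidity", "dr_indicators", "macular_edema",
    "exudate_near_macula", "exudate_on_macula", "laser_scar", "cataract",
    "glaucoma", "diabetic_retinopathy", "amd",
)

def meets_class_1(findings, cup_disc_ratio, retinal_vessels_visible, excluded=frozenset()):
    """Apply the configurable class-1 judgement elements of paragraph [0103].

    `findings` maps finding names to booleans (True means the finding is present in the image);
    `excluded` lists judgement elements deliberately left out for a given country.
    """
    if "retinal_vessels" not in excluded and not retinal_vessels_visible:
        return False
    if "cup_disc_ratio" not in excluded and cup_disc_ratio >= 0.3:
        return False
    return all(not findings.get(name, False)
               for name in CLASS_1_FINDINGS if name not in excluded)
```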
[0104] According to the embodiments shown in Figures 11-13B, comparing the user images determined to have classification code 2 with the reference images with subclassification code 2-1 further includes comparison using at least one of the following eye state judgement elements (a sketch of how such criteria sets can be encoded follows this list):
(a-i) aneurysms/dot-like bleeding;
(a-ii) hard exudates not in the macula lutea;
(b-i) mildly dense cataract (macula lutea and blood vessels visible);
(c-i) the presence of drusen (hard or soft) in the peripheral region away from the macula lutea (beyond 3 disc diameters from the centre of the macula lutea); and
(c-ii) pigment.
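The judgement elements for subclassification code 2-1 above, together with those for subclassification code 2-2 listed in paragraph [0105] below, amount to two criteria sets keyed by subclassification code. A hypothetical sketch, with abbreviated finding names showing only a few of the listed elements; the precedence of 2-2 over 2-1 is an assumption:

```python
# Abbreviated criteria sets; the full lists are given in paragraphs [0104] and [0105].
SUBCLASS_CRITERIA = {
    "2-1": {"aneurysm_or_dot_bleeding", "hard_exudate_outside_macula",
            "mild_dense_cataract", "peripheral_drusen", "pigment"},
    "2-2": {"more_than_three_dot_bleedings", "flame_shaped_bleeding", "cotton_wool_patches",
            "hard_exudate_in_macula", "vein_beading", "preretinal_hemorrhage",
            "vitreous_hemorrhage", "geographic_atrophy", "cup_disc_ratio_ge_0_3"},
}

def assign_subclassification(findings):
    """Return "2-2" if any emergency element is present, otherwise "2-1" if any
    non-emergency element is present, otherwise None; `findings` is a set of finding names."""
    if findings & SUBCLASS_CRITERIA["2-2"]:
        return "2-2"
    if findings & SUBCLASS_CRITERIA["2-1"]:
        return "2-1"
    return None
```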
[0105] Comparing the user images determined to have classification code 2 with the reference images with subclassification code 2-2 further includes comparison using at least one of the following eye state judgement elements:
(a-i) more than three dot-like bleedings;
(a-ii) flame-shaped bleeding;
(a-iii) cotton-wool patches with bleeding and aneurysms;
(a-iv) hard exudates with aneurysms and dot-like bleeding in the macula lutea;
(a-v) vein beading-like changes;
(a-vi) 2 or more bleedings;
(a-vii) intraretinal microvascular abnormality;
(a-viii) venous loop;
(a-ix) the presence of new blood vessels on the disc or new blood vessels elsewhere;
(a-x) retinal detachment;
(a-xi) preretinal hemorrhage;
(a-xii) vitreous hemorrhage;
(a-xiii) fibroplasia;
(b-i) partially or completely dense cataract obscuring the macula lutea and blood vessels;
(b-ii) the macula lutea and blood vessels are not visible;
(c-i) the presence of drusen (hard or soft) within the macula lutea (within 2 disc diameters from the centre of the macula lutea);
(c-ii) geographic atrophy;
(c-iii) subretinal fibrous scar;
(c-iv) retinal pigment epithelium detachment;
(c-v) subretinal fibrovascular lesions (subretinal bleeding);
(c-vi) choroidal neovascular membranes;
(c-vii) exudative age-related macular degeneration;
(d-i) the cup disc ratio of either eye is greater than or equal to 0.3;
(d-ii) disc asymmetry greater than or equal to 0.2;
(d-iii) disc bleeding;
(d-iv) the appearance of any notching or rim thinning;
(e-i) media opacity;
(e-ii) blurred macula lutea (which may be caused by a constricted pupil);
(e-iii) blood vessels and macula lutea not visible due to under-exposure or over-exposure; and
(e-iv) insufficient focus and/or incorrect positioning of the optic disc and macula lutea.
[0106] Alternatively: (i) the expert-classified retinal fundus images can be retinal fundus images classified by oculists; (ii) the method may further include sending the updated user file, to which the third, fourth or fifth follow-up code has been added, to the user terminal; (iii) the method may further include training the AI engine based on the oculist classification of the retinal fundus images of at least one user file with classification code 1; (iv) the method may further include training the AI engine based on the oculist classification of the retinal fundus images of at least one user file with classification code 2; (v) the method may further include training the AI engine based on the oculist classification of the retinal fundus images of at least one user file with subclassification code 2-1; (vi) the method may further include training the AI engine based on the oculist classification of the retinal fundus images of at least one user file with subclassification code 2-2; (vii) each retinal fundus image may include at least 3000*2000 pixels, a fundus region of at least 45 degrees, and a pixel resolution of at least 150 dpi; (viii) the retinal fundus images of at least one user file can be captured with a portable fundus camera, and the step of receiving the at least one user file at the server may include transmission of the at least one user file via a wireless data transmitter connectable to the portable fundus camera; (ix) the retinal fundus images of at least one user file can be captured with a portable fundus camera, and the step of receiving the at least one user file at the network server includes transmission of the at least one user file via the portable fundus camera, wherein the fundus camera may include a wireless data transmitter; (x) the network server can host at least one national portal and at least one global portal; or (xi) at least one user file can be uploaded to the network server through at least one portable application. Communication can also be realised through wired connections such as telephone, cable, DSL and optical fibre.
[0107] Through the wireless data transmitter (for example a mobile phone) connected to the portable fundus camera, the user's retinal fundus images and user data can be sent to a data centre or a laboratory hosting the AI engine and the computation model 1104, and classified quickly. From the classification of the user's retinal fundus images, follow-up actions such as arranging routine re-classification or booking a clinical oculist appointment (emergency or non-emergency) can be recommended. In this way, a portable fundus camera can be used in rural or remote areas to capture the user's retinal fundus images. The user data can be entered through a portable application and then transmitted over the local mobile data network; a portable fundus camera with a wireless data transmitter can also be used for user data input and wireless data transmission. Through the portable application, the user can obtain the classification code and the third, fourth or fifth follow-up code of the retinal fundus images and take corresponding action, such as obtaining a referral to a public hospital and accessing health information services, the content of which can be customised according to the case notes contained in his or her user file.
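A hypothetical sketch of the result payload a portable application might receive back from the portal; the field names and JSON structure are assumptions made only for illustration:

```python
import json

def build_result_payload(user_id, classification_code, follow_up_code,
                         referral=None, health_info_links=()):
    """Result message returned to the portable application after classification."""
    payload = {
        "user_id": user_id,
        "classification_code": classification_code,   # 1, 2, "2-1" or "2-2"
        "follow_up_code": follow_up_code,              # "third", "fourth" or "fifth"
        "referral": referral,                          # e.g. a public hospital, if advised
        "health_info": list(health_info_links),        # customised to the user's case notes
    }
    return json.dumps(payload)

# Usage sketch:
message = build_result_payload("user-001", "2-1", "fourth",
                               referral="nearest public eye clinic")
```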
[0108] The server can host multiple portals for uploading user files. The portals can be organised by country, region or language, or globally. At least one portal can be accessible via the portable application.
[0109] Alternatively, the comparison of the user's retinal fundus images with the reference images is analysed as follows: if the similarity between the user's retinal fundus images and the reference images classified as the first class is higher than the decision threshold in the computation model, the user's retinal fundus images are assigned to the same class, i.e. the first class. If the similarity between the user's retinal fundus images and the reference images classified as the second class is higher than the decision threshold in the computation model, the user's retinal fundus images are assigned to the same class, i.e. the second class.
[0110] Alternatively, the comparison of the user's retinal fundus images with the reference images is analysed as follows: if the similarity between the user's retinal fundus images and the reference images with subclassification code 2-1 is higher than the decision threshold in the computation model, the subclassification code of the user's retinal fundus images is determined to be 2-1 (classified as the second class, with the first subclass). If the similarity between the user's retinal fundus images and the reference images with subclassification code 2-2 is higher than the decision threshold in the computation model, the subclassification code of the user's retinal fundus images is determined to be 2-2 (classified as the second class, with the second subclass).
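A minimal sketch of this similarity-threshold decision rule, assuming the computation model exposes a per-code similarity score in [0, 1] (the scoring function itself is not shown, and the single-function form combining class and subclass is a simplification):

```python
def decide_by_similarity(similarities, threshold=0.5):
    """Paragraphs [0109]-[0110]: assign the code whose reference images are most similar,
    provided that similarity exceeds the decision threshold of the computation model.

    `similarities` maps a code ("1", "2-1" or "2-2") to a similarity score in [0, 1].
    """
    code, score = max(similarities.items(), key=lambda item: item[1])
    return code if score > threshold else None  # None: no code exceeds the threshold

# Usage sketch:
decide_by_similarity({"1": 0.2, "2-1": 0.7, "2-2": 0.4})   # -> "2-1"
```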
[0111] The automated process at the artificial intelligence portal can generate a concise, readable retinal fundus image classification and analysis report, which is sent to the user in a peripheral region or to the medical institution/oculist, to inform them of the classification code and subclassification code (if any) of the user's retinal fundus images. Based on the information recorded in the classification code and subclassification code (if any), they know whether the user needs to consult an oculist for further examination, whether the user needs to consult an oculist immediately, and whether the medical institution/oculist needs to give the user emergency medical treatment immediately (that is, whether the situation is an emergency). The classification report contains the retinal fundus image classification code/subclassification code obtained from the comparative analysis of the computation model, together with a corresponding colour marker/sign marker that facilitates identification. For example, '(-)' or the sign marker shown in Figure 15A can represent classification code 1, indicating that the user's retinal fundus images are classified as the first class; this sign marker can further be given a green colour marker. '(non-emergency)', '(+30% non-emergency)' or the sign marker shown in Figure 15B can represent classification code 2 with subclassification code 2-1, indicating that the user's retinal fundus images are classified as the second class with the first subclass; this sign marker can further be given a red colour marker. '(emergency)', '(+90% emergency)' or the sign marker shown in Figure 15C can represent classification code 2 with subclassification code 2-2, indicating that the user's retinal fundus images are classified as the second class with the second subclass; this sign marker can further be given a red colour marker. The 30% and 90% in the sign markers can alternatively be other percentages representing the likelihood that the user's eyes are at risk of disease. In Figure 15B, 1502 denotes 'non-emergency'; in Figure 15C, 1504 denotes 'emergency'.
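A hypothetical sketch of how the sign marker with an optional risk percentage could be composed (the exact strings and the source of the percentage are assumptions):

```python
def sign_marker(classification_code, subclassification_code=None, risk_percent=None):
    """Compose the report sign marker described in paragraph [0111]."""
    if classification_code == 1:
        return "(-)"                                    # first class, green colour marker
    pct = f"+{risk_percent:.0f}% " if risk_percent is not None else ""
    if subclassification_code == "2-1":
        return f"({pct}non-emergency)"                  # second class, first subclass, red
    return f"({pct}emergency)"                          # second class, second subclass, red

# Usage sketch:
sign_marker(2, "2-1", 30)   # -> "(+30% non-emergency)"
sign_marker(2, "2-2", 90)   # -> "(+90% emergency)"
```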
[0112] An example report as described above is as follows:
[0113] In the preceding detailed description, embodiments of the present invention have been described with reference to the accompanying drawings. The description of the various embodiments herein is not intended to single out or be limited to particular or specific representations of the disclosure, but merely to illustrate non-limiting examples of the disclosure.
[0114] The present disclosure serves to solve at least some of the problems identified above and problems associated with the prior art. Although only some embodiments of the disclosure are disclosed herein, it will be apparent to those skilled in the art, in view of this disclosure, that a variety of changes and/or modifications can be made to the disclosed embodiments without departing from the scope of the disclosure. The scope of the disclosure and the scope of the appended claims are not limited to the embodiments described herein.
Claims (1)
1. An image processing method, characterized in that the method includes: receiving an initial user file from a user terminal, the initial user file including user data and user images; loading the initial user file into a server, the server storing reference images and a computation model, the reference images including multiple reference images with classification code 1 and multiple reference images with classification code 2; using the computation model, comparing the user images with the reference images and determining the classification code of the user images as one of 1 and 2; storing the classification code of the user images in the initial user file to generate an updated user file; and sending the updated user file to the user terminal.

2. The method according to claim 1, characterized in that the updated user file includes a colour marker, the colour marker including a green marker corresponding to classification code 1 and a red marker corresponding to classification code 2.

3. The method according to claim 1, characterized in that, if the classification code of the user images is determined to be 1, the method further includes storing a first follow-up code in the initial user file to generate the updated user file.

4. The method according to claim 1, characterized in that, if the classification code of the user images is determined to be 2, the method further includes storing a second follow-up code in the initial user file to generate the updated user file.

5. The method according to claim 1, characterized in that it further includes storing the user images whose classification code has been determined into the server as reference images.

6. The method according to claim 1, characterized in that it further includes, before receiving the initial user file from the user terminal, loading the reference images into the server, training an artificial intelligence engine based on the reference images, and constructing the computation model using the artificial intelligence engine.

7. The method according to claim 6, characterized in that the artificial intelligence engine includes at least one algorithm, or a combination of algorithms, from machine learning algorithms and deep learning algorithms.

8. The method according to claim 7, characterized in that the artificial intelligence engine includes at least one of a support vector machine (SVM), a gradient boosting machine (GBM), a random forest and a convolutional neural network.

9. The method according to claim 6, characterized in that it further includes training the artificial intelligence engine based on the user images and the determined classification code.

10. The method according to claim 1, characterized in that the user images are retinal fundus images of the user, including at least 3000*2000 pixels, a fundus region of at least 45 degrees, and a pixel resolution of at least 150 dpi.

11. The method according to claim 1, characterized in that the user images are retinal fundus images of the user, and comparing the user images with the reference images further includes comparison using at least one of the following eye state judgement elements: (a) the multiple retinal vessels presented in the image; (b) a cup disc ratio less than 0.3; and (c) the absence of at least one of the following elements: (i) visible medium turbidity; (ii) diabetic retinopathy indicators including at least one of dot-like bleedings, aneurysms and hard exudates; (iii) CLBSflCAL OBSERVATION; (iv) macular edema; (v) exudate near the macula lutea; (vi) exudate on the macula lutea; (vii) laser scars; (viii) cataract; (ix) glaucoma; (x) diabetic retinopathy; and (xi) age-related macular degeneration, including at least one of multiple large drusens, geographic atrophy of a marked area with hypopigmentation, and choroidal neovascularization, wherein the age-related macular degeneration is at least one of atrophic, neovascular and exudative; wherein at least one of the eye state judgement elements may be excluded as a judgement element for the classification.

12. The method according to claim 1, wherein the user file is uploaded to the server by a wireless data transmitter.

13. The method according to claim 1, wherein the server hosts at least one national data transmission portal and at least one global data transmission portal.

14. The method according to claim 1, wherein the user file is uploaded to the server via at least one portable application.

15. The method according to claim 1, characterized in that the reference images with classification code 2 include multiple reference images with subclassification code 2-1 and multiple reference images with subclassification code 2-2, and the method includes comparing the user images determined to have classification code 2 with the multiple reference images with subclassification code 2-1 and the multiple reference images with subclassification code 2-2, so as to further determine the subclassification code of the user images with classification code 2 as one of 2-1 and 2-2, and storing the subclassification code of the user images in the initial user file to generate the updated user file.

16. The method according to claim 15, characterized in that the updated user file includes a text label, the text label including a 'non-emergency' label corresponding to subclassification code 2-1 and an 'emergency' label corresponding to subclassification code 2-2.

17. The method according to claim 15, characterized in that, if the subclassification code of the user images is determined to be 2-1, the method further includes storing a fourth follow-up code in the initial user file to generate the updated user file.

18. The method according to claim 15, characterized in that, if the subclassification code of the user images is determined to be 2-2, the method further includes storing a fifth follow-up code in the initial user file to generate the updated user file.

19. The method according to claim 15, characterized in that the user images are retinal fundus images of the user, and comparing the user images determined to have classification code 2 with the reference images with subclassification code 2-1 further includes comparison using at least one of the following eye state judgement elements: (a-i) aneurysms/dot-like bleeding; (a-ii) hard exudates not in the macula lutea; (b-i) mildly dense cataract (macula lutea and blood vessels visible); (c-i) the presence of drusen (hard or soft) in the peripheral region away from the macula lutea (beyond 3 disc diameters from the centre of the macula lutea); and (c-ii) pigment.

20. The method according to claim 15, characterized in that the user images are retinal fundus images of the user, and comparing the user images determined to have classification code 2 with the reference images with subclassification code 2-2 further includes comparison using at least one of the following eye state judgement elements: (a-i) more than three dot-like bleedings; (a-ii) flame-shaped bleeding; (a-iii) cotton-wool patches with bleeding and aneurysms; (a-iv) hard exudates with aneurysms and dot-like bleeding in the macula lutea; (a-v) vein beading-like changes; (a-vi) 2 or more bleedings; (a-vii) intraretinal microvascular abnormality; (a-viii) venous loop; (a-ix) the presence of new blood vessels on the disc or new blood vessels elsewhere; (a-x) retinal detachment; (a-xi) preretinal hemorrhage; (a-xii) vitreous hemorrhage; (a-xiii) fibroplasia; (b-i) partially or completely dense cataract obscuring the macula lutea and blood vessels; (b-ii) the macula lutea and blood vessels are not visible; (c-i) the presence of drusen (hard or soft) within the macula lutea (within 2 disc diameters from the centre of the macula lutea); (c-ii) geographic atrophy; (c-iii) subretinal fibrous scar; (c-iv) retinal pigment epithelium detachment; (c-v) subretinal fibrovascular lesions (subretinal bleeding); (c-vi) choroidal neovascular membranes; (c-vii) exudative age-related macular degeneration; (d-i) the cup disc ratio of either eye is greater than or equal to 0.3; (d-ii) disc asymmetry greater than or equal to 0.2; (d-iii) disc bleeding; (d-iv) the appearance of any notching or rim thinning; (e-i) media opacity; (e-ii) blurred macula lutea (which may be caused by a constricted pupil); (e-iii) blood vessels and macula lutea not visible due to under-exposure or over-exposure; and (e-iv) insufficient focus and/or incorrect positioning of the optic disc and macula lutea.

21. An image processing system, characterized in that the system includes: a server storing reference images and a computation model, the reference images including multiple reference images with classification code 1 and multiple reference images with classification code 2; and a user terminal for generating an initial user file, the initial user file including user data and user images; the user terminal being in communication connection with the server, wherein, after receiving the user file, the server starts the computation model, compares the user images with the reference images, determines the classification code of the user images as one of 1 and 2, stores the classification code of the user images in the initial user file to generate an updated user file, and sends the updated user file to the user terminal.

22. The system according to claim 21, characterized in that the updated user file includes a colour marker, the colour marker including a green marker corresponding to classification code 1 and a red marker corresponding to classification code 2.

23. The system according to claim 21, characterized in that the system further includes an artificial intelligence engine trained based on the reference images, the artificial intelligence engine being used to construct the computation model.

24. The system according to claim 22, characterized in that the artificial intelligence engine includes at least one algorithm, or a combination of algorithms, from machine learning algorithms and deep learning algorithms.

25. The system according to claim 23, characterized in that the artificial intelligence engine includes at least one of a support vector machine (SVM), a gradient boosting machine (GBM), a random forest and a convolutional neural network.

26. The system according to claim 21, characterized in that the user images are retinal fundus images of the user, and the server stores eye state judgement element information, the eye state judgement element information including: (a) the multiple retinal vessels presented in the image; (b) a cup disc ratio less than 0.3; and (c) at least one of the following elements: (i) visible medium turbidity; (ii) diabetic retinopathy indicators including at least one of dot-like bleedings, aneurysms and hard exudates; (iii) CLBSflCAL OBSERVATION; (iv) macular edema; (v) exudate near the macula lutea; (vi) exudate on the macula lutea; (vii) laser scars; (viii) cataract; (ix) glaucoma; (x) diabetic retinopathy; and (xi) age-related macular degeneration, including at least one of multiple large drusens, geographic atrophy of a marked area with hypopigmentation, and choroidal neovascularization, wherein the age-related macular degeneration is at least one of atrophic, neovascular and exudative.

27. The system according to claim 21, characterized in that the reference images with classification code 2 include multiple reference images with subclassification code 2-1 and multiple reference images with subclassification code 2-2, and the server starts the computation model, compares the user images determined to have classification code 2 with the multiple reference images with subclassification code 2-1 and the multiple reference images with subclassification code 2-2, further determines the subclassification code of the user images with classification code 2 as one of 2-1 and 2-2, and stores the subclassification code of the user images in the initial user file to generate the updated user file.

28. The system according to claim 27, characterized in that the updated user file includes a text label, the text label including a 'non-emergency' label corresponding to subclassification code 2-1 and an 'emergency' label corresponding to subclassification code 2-2.

29. The system according to claim 27, characterized in that the user images are retinal fundus images of the user, and comparing the user images determined to have classification code 2 with the reference images with subclassification code 2-1 further includes comparison using at least one of the following eye state judgement elements: (a-i) aneurysms/dot-like bleeding; (a-ii) hard exudates not in the macula lutea; (b-i) mildly dense cataract (macula lutea and blood vessels visible); (c-i) the presence of drusen (hard or soft) in the peripheral region away from the macula lutea (beyond 3 disc diameters from the centre of the macula lutea); and (c-ii) pigment.

30. The system according to claim 27, characterized in that the user images are retinal fundus images of the user, and comparing the user images determined to have classification code 2 with the reference images with subclassification code 2-2 further includes comparison using at least one of the following eye state judgement elements: (a-i) more than three dot-like bleedings; (a-ii) flame-shaped bleeding; (a-iii) cotton-wool patches with bleeding and aneurysms; (a-iv) hard exudates with aneurysms and dot-like bleeding in the macula lutea; (a-v) vein beading-like changes; (a-vi) 2 or more bleedings; (a-vii) intraretinal microvascular abnormality; (a-viii) venous loop; (a-ix) the presence of new blood vessels on the disc or new blood vessels elsewhere; (a-x) retinal detachment; (a-xi) preretinal hemorrhage; (a-xii) vitreous hemorrhage; (a-xiii) fibroplasia; (b-i) partially or completely dense cataract obscuring the macula lutea and blood vessels; (b-ii) the macula lutea and blood vessels are not visible; (c-i) the presence of drusen (hard or soft) within the macula lutea (within 2 disc diameters from the centre of the macula lutea); (c-ii) geographic atrophy; (c-iii) subretinal fibrous scar; (c-iv) retinal pigment epithelium detachment; (c-v) subretinal fibrovascular lesions (subretinal bleeding); (c-vi) choroidal neovascular membranes; (c-vii) exudative age-related macular degeneration; (d-i) the cup disc ratio of either eye is greater than or equal to 0.3; (d-ii) disc asymmetry greater than or equal to 0.2; (d-iii) disc bleeding; (d-iv) the appearance of any notching or rim thinning; (e-i) media opacity; (e-ii) blurred macula lutea (which may be caused by a constricted pupil); (e-iii) blood vessels and macula lutea not visible due to under-exposure or over-exposure; and (e-iv) insufficient focus and/or incorrect positioning of the optic disc and macula lutea.
Applications Claiming Priority (9)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
SG10201704418Y | 2017-05-30 | ||
SG10201704418YA SG10201704418YA (en) | 2017-05-30 | 2017-05-30 | An eye screening method using a model build using an ai engine trained with specialist-graded retinal fundus images |
SG10201710012Y | 2017-12-02 | ||
SG10201710012Y | 2017-12-02 | ||
SG10201710138S | 2017-12-06 | ||
SG10201710138S | 2017-12-06 | ||
SG10201802038Q | 2018-03-12 | ||
SG10201802038Q | 2018-03-12 | ||
PCT/SG2018/050262 WO2018222136A1 (en) | 2017-05-30 | 2018-05-28 | Image processing method and system |
Publications (1)
Publication Number | Publication Date |
---|---|
CN109348732A true CN109348732A (en) | 2019-02-15 |
Family
ID=64456369
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201880001587.4A Pending CN109348732A (en) | 2017-05-30 | 2018-05-28 | Image processing method and system |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN109348732A (en) |
WO (1) | WO2018222136A1 (en) |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109919035A (en) * | 2019-01-31 | 2019-06-21 | 平安科技(深圳)有限公司 | Improve method, apparatus, computer equipment and storage medium that attendance is identified by |
CN117877692B (en) * | 2024-01-02 | 2024-08-02 | 珠海全一科技有限公司 | Personalized difference analysis method for retinopathy |
Family Cites Families (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP4154156B2 (en) * | 2002-02-08 | 2008-09-24 | ソニーマニュファクチュアリングシステムズ株式会社 | Defect classification inspection system |
JP3944439B2 (en) * | 2002-09-26 | 2007-07-11 | 株式会社日立ハイテクノロジーズ | Inspection method and inspection apparatus using electron beam |
US9916538B2 (en) * | 2012-09-15 | 2018-03-13 | Z Advanced Computing, Inc. | Method and system for feature detection |
2018
- 2018-05-28: CN application CN201880001587.4A filed (published as CN109348732A), status: active, Pending
- 2018-05-28: WO application PCT/SG2018/050262 filed (published as WO2018222136A1), status: active, Application Filing
Patent Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20020095257A1 (en) * | 2000-03-27 | 2002-07-18 | Rosen Richard B. | Method and system for detection by raman measurements of bimolecular markers in the vitreous humor |
US20150110370A1 (en) * | 2013-10-22 | 2015-04-23 | Eyenuk, Inc. | Systems and methods for enhancement of retinal images |
CN104915675A (en) * | 2014-03-14 | 2015-09-16 | 欧姆龙株式会社 | Image processing device, image processing method, and image processing program |
CN105513077A (en) * | 2015-12-11 | 2016-04-20 | 北京大恒图像视觉有限公司 | System for screening diabetic retinopathy |
CN106295163A (en) * | 2016-08-02 | 2017-01-04 | 上海交迅智能科技有限公司 | The method of the disease of collaborative diagnosis in many ways based on intelligent terminal |
CN106530295A (en) * | 2016-11-07 | 2017-03-22 | 首都医科大学 | Fundus image classification method and device of retinopathy |
CN106682389A (en) * | 2016-11-18 | 2017-05-17 | 武汉大学 | Health management system for monitoring ocular lesions resulting from high blood pressure |
Non-Patent Citations (2)
Title |
---|
LI Zhanfeng (李占峰): "Application of a non-mydriatic ultra-wide-angle retinal imaging system in rapid large-scale diabetic retinopathy screening", China Master's Theses Full-text Database, Medicine and Health Sciences series *
TIAN Junzhang (田军章): "Research on the design and implementation of a PACS-based structured report (SR) module", China Doctoral Dissertations Full-text Database (PhD), Information Science and Technology series *
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113646805A (en) * | 2019-03-29 | 2021-11-12 | 人工智能技术公司 | Image-based detection of ophthalmic and systemic diseases |
CN112330610A (en) * | 2020-10-21 | 2021-02-05 | 郑州诚优成电子科技有限公司 | Corneal endothelial cell counting, collecting and accurate positioning method based on microvascular position |
CN112330610B (en) * | 2020-10-21 | 2024-03-29 | 郑州诚优成电子科技有限公司 | Accurate positioning method based on microvascular position cornea endothelial cell counting acquisition |
CN113393425A (en) * | 2021-05-19 | 2021-09-14 | 武汉大学 | Microvessel distribution symmetry quantification method for gastric mucosa staining amplification imaging |
CN113393425B (en) * | 2021-05-19 | 2022-04-26 | 武汉大学 | Microvessel distribution symmetry quantification method for gastric mucosa staining amplification imaging |
Also Published As
Publication number | Publication date |
---|---|
WO2018222136A1 (en) | 2018-12-06 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109348732A (en) | Image processing method and system | |
Balyen et al. | Promising artificial intelligence-machine learning-deep learning algorithms in ophthalmology | |
Gundluru et al. | Enhancement of detection of diabetic retinopathy using Harris hawks optimization with deep learning model | |
Mayro et al. | The impact of artificial intelligence in the diagnosis and management of glaucoma | |
US20220165418A1 (en) | Image-based detection of ophthalmic and systemic diseases | |
Rudnisky et al. | Visual acuity outcomes of the Boston keratoprosthesis type 1: multicenter study results | |
Wu et al. | Rapid assessment of avoidable blindness in Kunming, China | |
CN109003252A (en) | Image processing method and system | |
Dandona et al. | Design of a population-based study of visual impairment in India: the Andhra Pradesh Eye Disease Study | |
US20180296320A1 (en) | Forecasting cataract surgery effectiveness | |
AU2016265973A1 (en) | System and method for identifying a medical condition | |
CN114175095A (en) | Processing images of eyes using deep learning to predict vision | |
Lu et al. | A population-based study of visual impairment among pre-school children in Beijing: the Beijing study of visual impairment in children | |
US11062444B2 (en) | Artificial intelligence cataract analysis system | |
Zhang et al. | Applications of artificial intelligence in myopia: current and future directions | |
Ludwig et al. | Automatic identification of referral-warranted diabetic retinopathy using deep learning on mobile phone images | |
CN112580580A (en) | Pathological myopia identification method based on data enhancement and model fusion | |
WO2020126514A1 (en) | A method and device for predicting evolution over time of a vision-related parameter | |
CN104657722B (en) | Eye parameter detection equipment | |
Sia et al. | Prevalence of and risk factors for primary open-angle glaucoma in central Sri Lanka: the Kandy eye study | |
Lo et al. | Data Homogeneity Effect in Deep Learning‐Based Prediction of Type 1 Diabetic Retinopathy | |
Zhang et al. | Diagnostic accuracy of rapid assessment of avoidable blindness: a population-based assessment | |
Sinha et al. | Prospects of artificial intelligence in ophthalmic practice | |
Smits et al. | Machine learning in the detection of the glaucomatous disc and visual field | |
Holland et al. | Deep Learning–Based Clustering of OCT Images for Biomarker Discovery in Age-Related Macular Degeneration (PINNACLE Study Report 4) |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
WD01 | Invention patent application deemed withdrawn after publication | Application publication date: 20190215