
CN104618710B - Dysopia correction system based on enhanced light field display - Google Patents

Dysopia correction system based on enhanced light field display

Info

Publication number
CN104618710B
CN104618710B
Authority
CN
China
Prior art keywords
image
light field
module
visual
model
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201510007220.1A
Other languages
Chinese (zh)
Other versions
CN104618710A (en)
Inventor
左旺孟
吕德生
张宏志
吴圣深
邓红
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Individual
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Individual filed Critical Individual
Priority to CN201510007220.1A priority Critical patent/CN104618710B/en
Publication of CN104618710A publication Critical patent/CN104618710A/en
Application granted granted Critical
Publication of CN104618710B publication Critical patent/CN104618710B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Abstract

The invention provides a correction scheme, based on enhanced light field display, for patients with visual defects such as myopia, hyperopia and early-stage glaucoma. The scheme comprises a visual and depth information acquisition module, a dysopia modeling module and an enhanced light field display module. The visual and depth information acquisition module supports two modes: for a two-dimensional display, digital images are used as the input and the depth information is acquired; for real scenes, binocular stereo vision is adopted. The dysopia modeling module comprises a dysopia parameter setting module, an image degradation model and a light field calculation and processing module. The enhanced light field display module likewise supports two modes: for the two-dimensional display, an LCD (Liquid Crystal Display) visual display integrated in the system is adopted; for real scenes, a near-eye light field display module is adopted.

Description

Vision defect correction system based on enhanced light field display
Technical Field
The invention relates to a visual defect correction method, device and system that provide a correction scheme based on enhanced light field display for typical visual defects such as myopia, hyperopia and early glaucoma.
Background
Vision is the most important channel through which humans obtain information about their environment. Refractive errors, including myopia, hyperopia, presbyopia, astigmatism and higher-order aberrations, are among the most common vision disorders and are considered one of the leading causes of vision loss worldwide. They affect people of all ages: according to a WHO survey from 2005, about two billion people worldwide need to wear glasses, and with longer life expectancy and the growing amount of near-vision work, this number is expected to keep increasing. Beyond refractive errors, the WHO epidemiological survey and population estimates of 2002 indicate that more than 160 million people worldwide suffer from visual disorders, and glaucoma ranks first among irreversible blinding eye diseases.
At present, vision correction for refractive errors relies mainly on wearing optical glasses or on surgery to improve the patient's visual acuity; however, both approaches have limitations. Glasses can only correct low-order aberrations such as myopia and hyperopia and are not suitable for compensating high-order aberrations; optical correction may also irritate the eyes and is not acceptable to every user. The improvement in visual acuity obtainable from corneal laser surgery or intraocular lens implantation is limited, and these invasive procedures carry the risk of surgical complications, so their cost-effectiveness is low relative to the therapeutic benefit. Moreover, visual defects such as high-order aberrations, reduced visual acuity in early glaucoma and peripheral visual field loss cannot be corrected or compensated by these means.
Although some progress has been made in computational imaging, light field imaging and light field display both in China and abroad, no complete report of a vision defect correction method, device and system based on enhanced light field display has appeared so far.
Disclosure of Invention
The technical problem to be solved by the invention is as follows: in view of the problems and shortcomings of current clinical vision defect correction methods, the invention uses light field processing, visual and depth information acquisition and enhanced light field display to provide vision-defect patients with a new mechanism for generating high-definition, high-contrast images or three-dimensional scenes, and thereby offers a natural, unified and non-invasive visual defect correction method, device and system.
The invention aims to provide a novel correction method based on enhanced light field display for typical visual defects such as myopia, hyperopia, presbyopia, astigmatism, early glaucoma and the like.
The technical solution of the invention is as follows: a visual defect correction system based on enhanced light field display comprises a vision and depth information acquisition module (1), a visual defect modeling module (2) and an enhanced light field display module (3), wherein the vision and depth information acquisition module (1) acquires the expected image to be displayed to the user using digital image techniques and, at the same time, uses binocular stereo imaging to obtain the three-dimensional coordinates of different positions in the scene, computed from the disparity between the two eye-channel images, as the depth information to be presented to the user; the visual defect modeling module (2) takes the depth information of the scene and the user's type of visual defect as basic parameters and constructs, within a fixed area and depth range, the image degradation parameters relating the image perceived by the user to the expected image, and it comprises a visual defect parameter setting module, an image degradation model and a light field calculation and processing module; the enhanced light field display module (3) performs enhanced light field display according to the image degradation model and the expected image to be shown to the user and, combined with a light field sharing technique, finally lets the user see the expected image processed for the characteristics of his or her visual defect and perceive real three-dimensional scene information, so that the visual effect perceived by the user approaches or reaches that of normal human vision. The connections and signal flow between the modules are as follows: the vision and depth information acquisition module (1) provides the visual defect modeling module (2) with the parameter set required to establish the degradation model, including the angles between light rays and the plane normal, the depths of objects, planes or scenes, the dynamic distribution or contrast of image grey values, and the intensity of edges in the image; the visual defect modeling module (2) provides the enhanced light field display module (3) with a degradation model built from the point spread functions corresponding to the different types of visual defect; and the enhanced light field display module (3) computes and outputs the light field display result with a regularized pre-filtering method according to the degradation model provided by the visual defect modeling module (2).
The vision and depth information acquisition module (1) can work in two modes: binocular stereo imaging, and digital image acquisition combined with physical ranging. In the first mode, used when the user views a real three-dimensional scene, the module adopts binocular stereo imaging: a binocular camera captures the expected images corresponding to the left and right eyes, and the depth information of the scene is obtained by stereo matching. In the second mode, used when the user views images on a flat-panel display, the module combines the digital image with miniature-laser and infrared physical ranging: the original, unprocessed sharp digital image serves as the expected image, while the ranging module (10) uses a miniature laser and an infrared range finder to measure in real time the distance from objects in the central field of view to the eyes. The distance information acquired by the ranging module (10) in the second mode can also be used to assist, in the first mode, in estimating the distances from objects or targets at different depths to the eyes.
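As an illustration of the binocular stereo imaging mode, the sketch below estimates a per-pixel depth map from a rectified camera pair with a standard block-matching stereo matcher; block matching is only one possible stereo-matching choice, and the file names, focal length and baseline are placeholders rather than parameters taken from the patent.

```python
import cv2
import numpy as np

def estimate_depth(left_gray, right_gray, focal_px=700.0, baseline_m=0.06):
    """Per-pixel depth (metres) from a rectified grey-level stereo pair."""
    matcher = cv2.StereoBM_create(numDisparities=64, blockSize=15)
    # OpenCV returns fixed-point disparities scaled by 16
    disparity = matcher.compute(left_gray, right_gray).astype(np.float32) / 16.0
    disparity[disparity <= 0] = np.nan            # pixels with no reliable match
    return focal_px * baseline_m / disparity      # Z = f * b / disparity

left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)
depth_map = estimate_depth(left, right)
```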
The enhanced light field display module (3) can work in two modes: digital image combined with physical ranging, and binocular stereo imaging. In the first mode, the LCD visual display integrated in the system is adopted; in the second mode, a regularized pre-filtering method is used to determine the light field display result, ensuring that a vision-defect patient sees the expected image or three-dimensional scene with high definition and high contrast. The regularized pre-filtering method solves the following model:
$$L^{*}=\mathop{\arg\min}_{L(x,y,\theta,d)}\;\left\|p_d(x,y,u,v)\cdot L(x,y,\theta,d)-I_0(x,y)\right\|_F^{2}+\lambda\,\Psi\big(L(x,y,\theta,d)\big),\quad\text{s.t. }L(x,y,\theta,d)\ge 0,$$

wherein $\arg\min_x f(x)$ denotes the value of $x$ for which $f(x)$ is minimal, $\|\cdot\|_F^{2}$ denotes the squared Frobenius norm, $\lambda$ is the regularization parameter, $(x,y)$ and $(u,v)$ are the two-dimensional coordinates of the image plane and the convolution plane respectively, $\theta$ is the angle between a light ray and the plane normal, $d$ is the depth of the plane, $L(x,y,\theta,d)$ is the light field to be displayed, $I_0(x,y)$ is the expected image, $p_d(x,y,u,v)$ is the point spread function corresponding to the type of visual defect when the depth of the observation plane is $d$, $\Psi(L(x,y,\theta,d))$ is a regularization term designed for the characteristics of the light field equipment and of the light field itself, $L(x,y,\theta,d)\ge 0$ is the non-negativity constraint, and "$\cdot$" is the point-multiplication (generalized convolution) operator. For a light field of finite intensity, the following regularization term can be defined:
$$\Psi\big(L(x,y,\theta,d)\big)=\max\big|L(x,y,\theta,d)\big|,$$
the corresponding regularized pre-filtering model may be expressed as
Wherein h is the upper limit of the image pixel value, and L (x, y, theta, d) is less than or equal to h, which is the constraint on the upper limit of the image pixel value. The model can be solved using a sub-gradient descent or iterative pre-filtering algorithm.
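A minimal numerical sketch of the regularized pre-filtering idea is given below for the simplest case, a single display plane with isotropic ray angle, where the light field reduces to one pre-filtered image and the operator "$\cdot$" reduces to an ordinary convolution; the box constraint $0\le L\le h$ is enforced by projection, in the spirit of sub-gradient descent. The function name, the example kernel and the step size are illustrative assumptions, not values from the patent.

```python
import numpy as np
from scipy.signal import fftconvolve

def prefilter(expected, psf, h=1.0, step=0.5, iters=200):
    """Projected gradient descent on ||psf * L - expected||_F^2 with 0 <= L <= h."""
    psf = psf / psf.sum()
    psf_adj = psf[::-1, ::-1]                   # adjoint (correlation) kernel
    L = expected.astype(float).copy()
    for _ in range(iters):
        residual = fftconvolve(L, psf, mode="same") - expected
        grad = fftconvolve(residual, psf_adj, mode="same")
        L = np.clip(L - step * grad, 0.0, h)    # projection keeps 0 <= L <= h
    return L

# Example: pre-filter an image against a 9x9 uniform defocus kernel.
image = np.random.rand(128, 128)
box = np.full((9, 9), 1.0 / 81.0)
display_image = prefilter(image, box)
```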
The visual defect modeling module (2) establishes the image degradation model by jointly considering the type and parameters of the visual defect and the depth information of the scene. Taking myopia as an example, the point spread function $p_d(x,y,u,v)$ of the degradation model is
$$p_d(x,y,u,v)=\begin{cases}\dfrac{4}{\pi r^{2}(x,y)}, & \big((x-u)^{2}+(y-v)^{2}\big)/4\le r^{2}(x,y),\\[4pt]0, & \text{otherwise,}\end{cases}$$
Wherein the radius parameter r (x, y) is expressed as:
$$r(x,y)=\frac{\big|d_0(d,x,y)-d(x,y)\big|}{d_0(d,x,y)}\,a,$$
where $d_0(d,x,y)$ is the expected imaging distance, $a$ is the retinal radius, $d(x,y)$ is the scene or image depth, and $(x,y)$ and $(u,v)$ are the two-dimensional coordinates of the image plane and the convolution plane, respectively.
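The sketch below evaluates the myopic defocus model just described: the blur radius $r(x,y)$ grows with the mismatch between the expected imaging distance $d_0$ and the actual depth $d(x,y)$, and the PSF is a uniform disc. The disc is simply normalised to unit sum rather than written with the constant $4/(\pi r^2)$, and the depth, distance and retinal-radius values are placeholders.

```python
import numpy as np

def blur_radius(d0, depth_map, retina_radius):
    """r(x, y) = |d0 - d(x, y)| / d0 * a, evaluated per pixel."""
    return np.abs(d0 - depth_map) / d0 * retina_radius

def disc_psf(r, size=15):
    """Uniform disc kernel of radius r, normalised to unit sum."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    mask = (x ** 2 + y ** 2) <= max(r, 1e-6) ** 2
    return mask / mask.sum()

depth = np.full((480, 640), 2.5)                                   # scene 2.5 m away
r_map = blur_radius(d0=2.0, depth_map=depth, retina_radius=20.0)   # radius in pixels
kernel = disc_psf(float(r_map[0, 0]))                              # PSF at one pixel
```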
The image degradation model may be spatially invariant or spatially variant; to cover both kinds of point spread function, a general generalized convolution model is established as follows:
$$I_0(x,y)=p_d(x,y,u,v)\cdot I_s(u,v)=\sum_{i=1}^{C}f_i\big(g_i(I_s(x,y))\otimes k_i\big),$$

where $k_i$ denotes the $i$-th basis filter, $f_i$ and $g_i$ are point-wise transformation functions whose definitions depend on the type of visual defect, $C$ is the number of basis filters, "$\cdot$" denotes the generalized linear operator corresponding to the image degradation model, "$\otimes$" denotes convolution, $(x,y)$ are the two-dimensional coordinates of the image plane, $p_d(x,y,u,v)$ is the point spread function of the degradation model, $I_s(x,y)$ is the input image, and $I_0(x,y)$ is the expected image; $k_i$, $f_i$ and $g_i$ can all be derived from the image degradation model. If $C=1$, the generalized convolution model reduces to the standard spatially invariant degradation model, so it provides a unified solution for both spatially invariant and spatially variant image degradation.
The working principle is therefore: the vision and depth information acquisition module acquires the expected image and the depth information; the visual defect modeling module relates the image degradation model to the visual defect parameters and the scene depth information and builds the generalized convolution model; the source image or source light field is then computed with the regularized pre-filtering model and shown on a liquid crystal display or a light field display.
The invention also addresses the case in which the two eyes have different visual defects, using a light field and screen sharing technique. When the binocular defects differ, the vision and depth information acquisition module (1) is unchanged and the other two modules are adjusted: first, the visual defect modeling module (2) constructs a visual defect parameter setting module, an image degradation model and a light field calculation and processing module separately for each eye; second, the enhanced light field display module (3) performs enhanced light field display according to the two eye-specific degradation models and the expected images and, combined with the light field sharing technique, reconstructs real three-dimensional scene information, so that the visual effect perceived by the user approaches or reaches that of normal human eyes.
Meanwhile, the generalized convolution and regularized pre-filtering methods can handle spatially varying and compound visual defects, effectively overcoming the shortcomings of traditional correction with optical glasses and providing a natural, unified and non-invasive visual defect correction method, device and system.
Compared with the prior art, the advantages of the invention over conventional solutions based on optical glasses are as follows. First, current vision correction for refractive errors relies mainly on wearing optical glasses or on surgery, both of which have limitations; in particular, optical glasses cannot handle compound visual defects. Second, the invention can effectively correct spatially varying, high-order aberration defects such as early glaucoma, overcoming the inability of optical glasses to compensate high-order aberrations, and it is preferable to corneal laser surgery or intraocular lens implantation for defects, such as reduced visual acuity and peripheral visual field loss in early glaucoma, that those procedures cannot correct or compensate. Third, unlike optical correction, the method does not irritate the eyes and is easily accepted by users; and unlike corneal laser surgery or intraocular lens implantation, which carry the risk of surgical complications, it is non-invasive.
drawings
FIG. 1 is a diagram illustrating the relationship between modules according to the present invention.
Fig. 2 is a schematic diagram of the structure of the smart glasses corresponding to the enhanced light field display module (3) of the present invention including a single-layer or multi-layer LCD. Fig. 2a is an overall schematic view of the smart glasses, and fig. 2b is an exploded schematic view of a module of the smart glasses system.
Fig. 3 is a schematic structural diagram of an intelligent helmet corresponding to the enhanced light field display module (3) of the present invention when the enhanced light field display module includes a helmet-type near-eye light field display device. Fig. 3a is an overall schematic diagram of the intelligent helmet, and fig. 3b is an exploded schematic diagram of the intelligent helmet system module. Fig. 3c is a rear view of an exploded view of the smart helmet system module.
Fig. 4 is a module relationship diagram of a first embodiment of the smart glasses according to the present invention when the enhanced light field display module (3) includes a single-layer or multi-layer LCD.
Fig. 5 is a module relationship diagram of a second embodiment of the intelligent helmet according to the present invention when the enhanced light field display module (3) includes a helmet-type near-eye light field display device.
Detailed Description
Take early glaucoma as an example for illustration. With reference to fig. 1, the visual defect correction system based on enhanced light field display comprises three main modules, a vision and depth information acquisition module (1), a visual defect modeling module (2) and an enhanced light field display module (3), wherein the vision and depth information acquisition module (1) acquires the expected image to be displayed to the user using digital image techniques and, at the same time, uses binocular stereo imaging to obtain the three-dimensional coordinates of different positions in the scene, computed from the disparity between the two eye-channel images, as the depth information to be presented to the user; the visual defect modeling module (2) takes the depth information of the scene and the user's type of visual defect as basic parameters and constructs, within a fixed area and depth range, the image degradation parameters relating the image perceived by the user to the expected image, and it comprises a visual defect parameter setting module, an image degradation model and a light field calculation and processing module; the enhanced light field display module (3) performs enhanced light field display according to the image degradation model and the expected image to be shown to the user and, combined with a light field sharing technique, finally lets the user see the expected image processed for the characteristics of his or her visual defect and perceive real three-dimensional scene information, so that the visual effect perceived by the user approaches or reaches that of normal human vision. The connections and signal flow between the modules are as follows: the vision and depth information acquisition module (1) provides the visual defect modeling module (2) with the parameter set required to establish the degradation model, including the angles between light rays and the plane normal, the depths of objects, planes or scenes, the dynamic distribution or contrast of image grey values, and the intensity of edges in the image; the visual defect modeling module (2) provides the enhanced light field display module (3) with a degradation model built from the point spread functions corresponding to the different types of visual defect; and the enhanced light field display module (3) computes and outputs the light field display result with a regularized pre-filtering method according to the degradation model provided by the visual defect modeling module (2).
The vision and depth information acquisition module (1) can work in two modes: binocular stereo imaging, and digital image acquisition combined with physical ranging. In the first mode, used when the user views a real three-dimensional scene, the module adopts binocular stereo imaging: a binocular camera captures the expected images corresponding to the left and right eyes, and the depth information of the scene is obtained by stereo matching. In the second mode, used when the user views images on a flat-panel display, the module combines the digital image with miniature-laser and infrared physical ranging: the original, unprocessed sharp digital image serves as the expected image, while the ranging module (10) uses a miniature laser and an infrared range finder to measure in real time the distance from objects in the central field of view to the eyes. The distance information acquired by the ranging module (10) in the second mode can also be used to assist, in the first mode, in estimating the distances from objects or targets at different depths to the eyes.
The enhanced light field display module (3) can work in two modes: digital image combined with physical ranging, and binocular stereo imaging. In the first mode, the LCD visual display integrated in the system is adopted; in the second mode, a regularized pre-filtering method is used to determine the light field display result, ensuring that a vision-defect patient sees the expected image or three-dimensional scene with high definition and high contrast. The regularized pre-filtering method solves the following model:
$$L^{*}=\mathop{\arg\min}_{L(x,y,\theta,d)}\;\left\|p_d(x,y,u,v)\cdot L(x,y,\theta,d)-I_0(x,y)\right\|_F^{2}+\lambda\,\Psi\big(L(x,y,\theta,d)\big),\quad\text{s.t. }L(x,y,\theta,d)\ge 0,$$

wherein $\arg\min_x f(x)$ denotes the value of $x$ for which $f(x)$ is minimal, $\|\cdot\|_F^{2}$ denotes the squared Frobenius norm, $\lambda$ is the regularization parameter, $(x,y)$ and $(u,v)$ are the two-dimensional coordinates of the image plane and the convolution plane respectively, $\theta$ is the angle between a light ray and the plane normal, $d$ is the depth of the plane, $L(x,y,\theta,d)$ is the light field to be displayed, $I_0(x,y)$ is the expected image, $p_d(x,y,u,v)$ is the point spread function corresponding to the type of visual defect when the depth of the observation plane is $d$, $\Psi(L(x,y,\theta,d))$ is a regularization term designed for the characteristics of the light field equipment and of the light field itself, $L(x,y,\theta,d)\ge 0$ is the non-negativity constraint, and "$\cdot$" is the point-multiplication (generalized convolution) operator. For a light field of finite intensity, the following regularization term can be defined:
$$\Psi\big(L(x,y,\theta,d)\big)=\max\big|L(x,y,\theta,d)\big|,$$
The corresponding regularized pre-filtering model may be expressed as

$$L^{*}=\mathop{\arg\min}_{L(x,y,\theta,d)}\;\left\|p_d(x,y,u,v)\cdot L(x,y,\theta,d)-I_0(x,y)\right\|_F^{2},\quad\text{s.t. }0\le L(x,y,\theta,d)\le h,$$

wherein $h$ is the upper limit of the image pixel value and $L(x,y,\theta,d)\le h$ is the corresponding upper-bound constraint. The model can be solved with a sub-gradient descent or an iterative pre-filtering algorithm.
The visual defect modeling module (2) establishes the image degradation model by jointly considering the type and parameters of the visual defect and the depth information of the scene. Taking myopia as an example, the point spread function $p_d(x,y,u,v)$ of the degradation model is

$$p_d(x,y,u,v)=\begin{cases}\dfrac{4}{\pi r^{2}(x,y)}, & \big((x-u)^{2}+(y-v)^{2}\big)/4\le r^{2}(x,y),\\[4pt]0, & \text{otherwise,}\end{cases}$$

wherein the radius parameter $r(x,y)$ is expressed as:
$$r(x,y)=\frac{\big|d_0(d,x,y)-d(x,y)\big|}{d_0(d,x,y)}\,a,$$
where $d_0(d,x,y)$ is the expected imaging distance, $a$ is the retinal radius, $d(x,y)$ is the scene or image depth, and $(x,y)$ and $(u,v)$ are the two-dimensional coordinates of the image plane and the convolution plane, respectively.
The image degradation model can be spatial-invariant or spatial-variant; for point spread functions with space domain unchanged and space domain changed, a general generalized convolution calculation model is established as follows:
$$I_0(x,y)=p_d(x,y,u,v)\cdot I_s(u,v)=\sum_{i=1}^{C}f_i\big(g_i(I_s(x,y))\otimes k_i\big),$$

where $k_i$ denotes the $i$-th basis filter, $f_i$ and $g_i$ are point-wise transformation functions whose definitions depend on the type of visual defect, $C$ is the number of basis filters, "$\cdot$" denotes the generalized linear operator corresponding to the image degradation model, "$\otimes$" denotes convolution, $(x,y)$ are the two-dimensional coordinates of the image plane, $p_d(x,y,u,v)$ is the point spread function of the degradation model, $I_s(x,y)$ is the input image, and $I_0(x,y)$ is the expected image; $k_i$, $f_i$ and $g_i$ can all be derived from the image degradation model. If $C=1$, the generalized convolution model reduces to the standard spatially invariant degradation model, so it provides a unified solution for both spatially invariant and spatially variant image degradation.
The invention comprises two specific embodiments: first, when the enhanced light field display module (3) comprises a single or multi-layer LCD, the hardware representation of the system is in the form of smart glasses; second, when the enhanced light field display module (3) comprises a head-mounted near-eye light field display device, then the hardware representation of the system is in the form of a smart helmet. Two embodiments are illustrated below:
the first embodiment is as follows: with reference to fig. 2a), when a user views an image display device such as a flat panel display (e.g., a television, a computer display, an advertisement screen, etc.), the display device is a single-layer or multi-layer LCD, and the hardware representation of the system is in the form of smart glasses. FIG. 2b) is an exploded view of the system module of the present invention when the display device is a single-layer or multi-layer LCD, comprising: the system comprises a visual defect parameter setting module (4), a degradation model and light field calculation processing module (5), an LCD visual display (6) integrated in the system, a distance measuring module (10), a power supply (11), a spectacle frame (12), Bluetooth earphones (13) and a wireless network module (14).
In connection with fig. 2b), the frame (12) provides integral assembly positioning and support for all modules of the system; an LCD visual display (6) integrated in the system is arranged on the position of the spectacle frame corresponding to the lenses of the traditional spectacles; the distance measurement module (10) is positioned in front of an LCD visual display (6) integrated in the system and is respectively connected with the wireless network module (14), the visual defect parameter setting module (4), the degradation model and the light field calculation processing module (5) on two sides of the spectacle frame (12); the visual defect parameter setting module (4) is directly connected with the degradation model and the light field calculation processing module (5) and is packaged in the same shell; the Bluetooth headset (13) and the power supply (11) are respectively positioned at the ear-hang position of the spectacle frame. The ranging module (10), the LCD visual display (6) integrated in the system, the visual defect parameter setting module (4), the degradation model and light field calculation processing module (5) and the Bluetooth headset (13) can transmit information through the wireless network module (14).
Referring to fig. 4, when the display device is a single-layer or multi-layer LCD, the vision and depth information acquisition module (1) consists of digital image acquisition and the ranging module (10); the visual defect modeling module (2) comprises the visual defect parameter setting module (4) and the degradation model and light field calculation processing module (5); and the enhanced light field display module (3) consists of the wireless network module (14) and the LCD visual display (6) integrated in the system. This smart-glasses embodiment of the invention involves a parameter setting stage and a use stage. In the parameter setting stage, the visual defect parameter setting module (4) allows the user to preset the visual defect parameters according to the results of a vision examination. In the use stage, the system uses the wireless network module (14) to transmit, in real time, the expected image sequence shown on the external display device (such as a television, computer monitor or advertising screen) that the user views through the system to the degradation model and light field calculation processing module (5); at the same time it activates the ranging module (10) to measure in real time the physical distance (depth information) between that external display device and the user's eyes, and passes this depth information to the degradation model and light field calculation processing module (5). The degradation model and light field calculation processing module (5) then generates the image degradation model from the type and parameters of the visual defect and the depth information, solves the regularized pre-filtering model with a sub-gradient or iterative pre-filtering method, and finally sends the result in real time, again via the wireless network module (14), to the near-eye LCD visual display (6) integrated in the system for the user to view.
The second embodiment is as follows: with reference to fig. 3a), when the display device is a helmet-type near-eye light field display device, the hardware form of the system is a head-mounted smart device. Fig. 3b) is an exploded view of the system modules in this case, comprising: the visual defect parameter setting module (4), the degradation model and light field calculation processing module (5), a binocular image acquisition module (7), a near-eye light field display module (8), a binocular vision reconstruction module (9), a power supply (11), a spectacle frame (12), a Bluetooth headset (13) and a wireless network module (14).
In connection with fig. 3b), the frame (12) provides integral assembly positioning and support for all modules of the system; the near-eye field display module (8) is arranged on the spectacle frame at a position corresponding to the lenses of the traditional spectacles; the binocular image acquisition module (7) is positioned in front of a near-eye light field display module (8) integrated in the system and is respectively connected with the wireless network module (14), the visual defect parameter setting module (4), the degradation model and the light field calculation processing module (5) on two sides of the spectacle frame (12); the visual defect parameter setting module (4) is directly connected with the degradation model and the light field calculation processing module (5) and is packaged in the same shell; the Bluetooth headset (13) and the power supply (11) are respectively positioned at the ear-hang position of the spectacle frame. The near-eye light field display module (8), the binocular image acquisition module (7), the LCD visual display (6) integrated in the system, the visual defect parameter setting module (4), the degradation model and light field calculation processing module (5) and the Bluetooth headset (13) can transmit information through the wireless network module (14).
With reference to fig. 5, when the display device is a helmet-type near-eye light field display device, the vision and depth information acquisition module (1) consists of the binocular image acquisition module (7) and the binocular vision reconstruction module (9); the visual defect modeling module (2) comprises the visual defect parameter setting module (4) and the degradation model and light field calculation processing module (5); and the enhanced light field display module (3) consists of the wireless network module (14) and the near-eye light field display module (8). This smart-helmet embodiment likewise involves a parameter setting stage and a use stage, as follows:
in the parameter setting stage, the vision defect parameter setting module (4) allows a user to preset vision defect parameters according to vision inspection results.
In the use stage, the system first uses the binocular image acquisition module (7) to acquire the expected image sequence in real time, and simultaneously starts the binocular vision reconstruction module (9) to measure the distances and depth information between the different objects in the three-dimensional scene and the observer's eyes.
Then the degradation model and light field calculation processing module (5) generates the image degradation model from the type and parameters of the visual defect and the depth information. The point spread function $p_d(x,y,u,v)$ of the degradation model is

$$p_d(x,y,u,v)=\begin{cases}\dfrac{4}{\pi r^{2}(x,y)}, & \big((x-u)^{2}+(y-v)^{2}\big)/4\le r^{2}(x,y),\\[4pt]0, & \text{otherwise,}\end{cases}$$
Where (x, y) and (u, v) are the two-dimensional coordinates of the image and convolution plane, respectively. The radius parameter r (x, y) is of the form:
$$r(x,y)=\frac{\big|d_0(d,x,y)-d(x,y)\big|}{d_0(d,x,y)}\,a,$$
where $d_0(d,x,y)$ is the expected imaging distance, $a$ is the retinal radius, $d(x,y)$ is the scene or image depth, and $(x,y)$ and $(u,v)$ are the two-dimensional coordinates of the image plane and the convolution plane, respectively.
In the same way, corresponding image degradation models can be established for hyperopia, glaucoma, cataract and other types of visual defect. Based on the point spread function, the degraded image perceived by a user with early glaucoma is modeled as
$$I_0(x,y)=\iint_{u,v} I_s(u,v)\,p_d(x,y,u,v)\,\mathrm{d}u\,\mathrm{d}v,$$
where $I_s(x,y)$ is the input image and $I_0(x,y)$ is the expected image.
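The spatially varying degradation integral above can be evaluated directly by giving every output pixel its own point spread function. The sketch below does this with a per-pixel uniform window whose radius grows toward the periphery, a placeholder meant only to mimic a peripheral-field loss such as early glaucoma; the radius map and image are synthetic.

```python
import numpy as np

def spatially_varying_blur(image, radius_map):
    """Discrete I0(x,y) = sum_{u,v} I_s(u,v) p_d(x,y,u,v) with per-pixel uniform PSFs."""
    h, w = image.shape
    out = np.zeros_like(image, dtype=float)
    for y in range(h):
        for x in range(w):
            r = max(int(round(radius_map[y, x])), 0)
            y0, y1 = max(y - r, 0), min(y + r + 1, h)
            x0, x1 = max(x - r, 0), min(x + r + 1, w)
            out[y, x] = image[y0:y1, x0:x1].mean()   # uniform weights on the local window
    return out

img = np.random.rand(64, 64)
yy, xx = np.mgrid[0:64, 0:64]
radius_map = np.hypot(yy - 32, xx - 32) / 8.0        # blur grows toward the periphery
perceived = spatially_varying_blur(img, radius_map)
```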
The image degradation model may be spatially invariant or spatially variant; to cover both kinds of point spread function, a general generalized convolution model is established as

$$I_0(x,y)=p_d(x,y,u,v)\cdot I_s(u,v)=\sum_{i=1}^{C}f_i\big(g_i(I_s(x,y))\otimes k_i\big),$$

where $k_i$ denotes the $i$-th basis filter, $f_i$ and $g_i$ are point-wise transformation functions, $C$ is the number of basis filters, "$\cdot$" denotes the generalized linear operator corresponding to the image degradation model, $(x,y)$ are the two-dimensional coordinates of the image plane, $p_d(x,y,u,v)$ is the point spread function of the degradation model, $I_s(x,y)$ is the input image, and $I_0(x,y)$ is the expected image; $k_i$, $f_i$ and $g_i$ can all be derived from the image degradation model. Under spatially invariant degradation $C=1$ and the generalized convolution model reduces to the standard spatially invariant degradation model, so it provides a unified solution for both spatially invariant and spatially variant image degradation. Next, the regularized pre-filtering model is solved with a sub-gradient or iterative pre-filtering method:
$$L^{*}=\mathop{\arg\min}_{L(x,y,\theta,d)}\;\left\|p_d(x,y,u,v)\cdot L(x,y,\theta,d)-I_0(x,y)\right\|_F^{2}+\lambda\,\Psi\big(L(x,y,\theta,d)\big),\quad\text{s.t. }L(x,y,\theta,d)\ge 0,$$

where $\lambda$ is the regularization parameter, $L(x,y,\theta,d)$ is the light field to be displayed to the user, $(x,y)$ are the plane coordinates, $\theta$ is the angle between a light ray and the plane normal, $d$ is the depth of the plane, and $\Psi(L(x,y,\theta,d))$ is a regularization term designed for the characteristics of the light field device and of the light field itself; $L(x,y,\theta,d)\ge 0$ is the non-negativity constraint. For finite-intensity light fields, the regularization term can be defined as follows:
$$\Psi\big(L(x,y,\theta,d)\big)=\max\big|L(x,y,\theta,d)\big|,$$
the corresponding regularized pre-filtering model may ultimately be expressed as
The model can be solved by using a sub-gradient descent or iterative pre-filtering algorithm; when d is unchanged and theta is isotropic, the model can be directly applied to a single-layer flat display; when θ is isotropic and d is a fixed set of parameters, the above model can be directly applied to a multi-layer liquid crystal display.

Claims (5)

1. A visual defect correction system based on enhanced light field display, comprising three main modules, a vision and depth information acquisition module (1), a visual defect modeling module (2) and an enhanced light field display module (3), characterized in that: the vision and depth information acquisition module (1) acquires the expected image to be displayed to the user using digital image techniques and, at the same time, uses binocular stereo imaging to obtain the three-dimensional coordinates of different positions in the scene, computed from the disparity between the two eye-channel images, as the depth information to be presented to the user; the visual defect modeling module (2) takes the depth information of the scene and the user's type of visual defect as basic parameters and constructs, within a fixed area and depth range, the image degradation parameters relating the image perceived by the user to the expected image, and it comprises a visual defect parameter setting module, an image degradation model and a light field calculation and processing module; the enhanced light field display module (3) performs enhanced light field display according to the image degradation model and the expected image to be shown to the user and, combined with a light field sharing technique, finally lets the user see the expected image processed for the characteristics of his or her visual defect and perceive real three-dimensional scene information, so that the visual effect perceived by the user approaches or reaches that of normal human vision; the connections and signal flow between the modules are as follows: the vision and depth information acquisition module (1) provides the visual defect modeling module (2) with the parameter set required to establish the degradation model, including the angles between light rays and the plane normal, the depths of objects, planes or scenes, the dynamic distribution or contrast of image grey values, and the intensity of edges in the image; the visual defect modeling module (2) provides the enhanced light field display module (3) with a degradation model built from the point spread functions corresponding to the different types of visual defect; and the enhanced light field display module (3) computes and outputs the light field display result with a regularized pre-filtering method according to the degradation model provided by the visual defect modeling module (2).
2. The system of claim 1, wherein: the vision and depth information acquisition module (1) can work in two modes, binocular stereo imaging and digital image acquisition combined with physical ranging; in the first mode, used when the user views a real three-dimensional scene, the module adopts binocular stereo imaging: a binocular camera captures the expected images corresponding to the left and right eyes, and the depth information of the scene is obtained by stereo matching; in the second mode, used when the user views images on a flat-panel display, the module combines the digital image with miniature-laser and infrared physical ranging: the original, unprocessed sharp digital image serves as the expected image, while the ranging module (10) uses a miniature laser and an infrared range finder to measure in real time the distance from objects in the central field of view to the eyes; and the distance information acquired by the ranging module (10) in the second mode can also be used to assist, in the first mode, in estimating the distances from objects or targets at different depths to the eyes.
3. The visual defect correction system based on enhanced light field display according to claim 1, characterized in that the enhanced light field display module (3) can work in two modes, digital image combined with physical ranging and binocular stereo imaging; in the first mode, the LCD visual display integrated in the system is adopted; in the second mode, a regularized pre-filtering method is used to determine the light field display result, ensuring that a vision-defect patient sees the expected image or three-dimensional scene with high definition and high contrast, wherein the regularized pre-filtering method solves the following model:
$$L^{*}=\mathop{\arg\min}_{L(x,y,\theta,d)}\;\left\|p_d(x,y,u,v)\cdot L(x,y,\theta,d)-I_0(x,y)\right\|_F^{2}+\lambda\,\Psi\big(L(x,y,\theta,d)\big),\quad\text{s.t. }L(x,y,\theta,d)\ge 0,$$

wherein $\arg\min_x f(x)$ denotes the value of $x$ for which $f(x)$ is minimal, $\|\cdot\|_F^{2}$ denotes the squared Frobenius norm, $\lambda$ is the regularization parameter, $(x,y)$ and $(u,v)$ are the two-dimensional coordinates of the image plane and the convolution plane respectively, $\theta$ is the angle between a light ray and the plane normal, $d$ is the depth of the plane, $L(x,y,\theta,d)$ is the light field to be displayed, $I_0(x,y)$ is the expected image, $p_d(x,y,u,v)$ is the point spread function corresponding to the type of visual defect when the depth of the observation plane is $d$, $\Psi(L(x,y,\theta,d))$ is a regularization term designed for the light field device and the characteristics of the light field itself, $L(x,y,\theta,d)\ge 0$ is the non-negativity constraint, and "$\cdot$" is the point-multiplication (generalized convolution) operator; for a finite-intensity light field, the regularization term can be defined as follows:
$$\Psi\big(L(x,y,\theta,d)\big)=\max\big|L(x,y,\theta,d)\big|,$$
the corresponding regularized pre-filtering model may be expressed as

$$L^{*}=\mathop{\arg\min}_{L(x,y,\theta,d)}\;\left\|p_d(x,y,u,v)\cdot L(x,y,\theta,d)-I_0(x,y)\right\|_F^{2},\quad\text{s.t. }0\le L(x,y,\theta,d)\le h,$$

wherein $h$ is the upper limit of the image pixel value and $L(x,y,\theta,d)\le h$ is the constraint on that upper limit; the model can be solved with a sub-gradient descent or iterative pre-filtering algorithm.
4. The visual defect correction system based on enhanced light field display according to claim 1, characterized in that the visual defect modeling module (2) establishes the image degradation model by jointly considering the type and parameters of the visual defect and the depth information of the scene; taking myopia as an example, the point spread function $p_d(x,y,u,v)$ of the degradation model is

$$p_d(x,y,u,v)=\begin{cases}\dfrac{4}{\pi r^{2}(x,y)}, & \big((x-u)^{2}+(y-v)^{2}\big)/4\le r^{2}(x,y),\\[4pt]0, & \text{otherwise,}\end{cases}$$

wherein the radius parameter $r(x,y)$ is expressed as

$$r(x,y)=\frac{\big|d_0(d,x,y)-d(x,y)\big|}{d_0(d,x,y)}\,a,$$

where $d_0(d,x,y)$ is the expected imaging distance, $a$ is the retinal radius, $d(x,y)$ is the scene or image depth, and $(x,y)$ and $(u,v)$ are the two-dimensional coordinates of the image plane and the convolution plane, respectively.
5. The system of claim 1, wherein: the image degradation model may be spatially invariant or spatially variant; to cover both kinds of point spread function, a general generalized convolution model is established as follows:

$$I_0(x,y)=p_d(x,y,u,v)\cdot I_s(u,v)=\sum_{i=1}^{C}f_i\big(g_i(I_s(x,y))\otimes k_i\big),$$

where $k_i$ denotes the $i$-th basis filter, $f_i$ and $g_i$ are point-wise transformation functions whose definitions depend on the type of visual defect, $C$ is the number of basis filters, "$\cdot$" denotes the generalized linear operator corresponding to the image degradation model, "$\otimes$" denotes convolution, $(x,y)$ are the two-dimensional coordinates of the image plane, $p_d(x,y,u,v)$ is the point spread function of the degradation model, $I_s(x,y)$ is the input image, and $I_0(x,y)$ is the expected image; $k_i$, $f_i$ and $g_i$ can all be derived from the image degradation model; if $C=1$, the generalized convolution model reduces to the standard spatially invariant degradation model, so it provides a unified solution for both spatially invariant and spatially variant image degradation.
CN201510007220.1A 2015-01-08 2015-01-08 Dysopia correction system based on enhanced light field display Active CN104618710B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510007220.1A CN104618710B (en) 2015-01-08 2015-01-08 Dysopia correction system based on enhanced light field display

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201510007220.1A CN104618710B (en) 2015-01-08 2015-01-08 Dysopia correction system based on enhanced light field display

Publications (2)

Publication Number Publication Date
CN104618710A CN104618710A (en) 2015-05-13
CN104618710B true CN104618710B (en) 2017-01-18

Family

ID=53152966

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510007220.1A Active CN104618710B (en) 2015-01-08 2015-01-08 Dysopia correction system based on enhanced light field display

Country Status (1)

Country Link
CN (1) CN104618710B (en)

Families Citing this family (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105208366A (en) * 2015-09-16 2015-12-30 云南师范大学 Method for enhancing stereoscopic vision of myopia patient
US10921586B2 (en) 2016-04-14 2021-02-16 Huawei Technologies Co., Ltd. Image processing method and apparatus in virtual reality device
KR102412525B1 (en) 2016-07-25 2022-06-23 매직 립, 인코포레이티드 Optical Field Processor System
CN106791792B (en) * 2016-12-16 2019-05-14 宇龙计算机通信科技(深圳)有限公司 Adjust the method and system that VR equipment shows image
US20180262758A1 (en) * 2017-03-08 2018-09-13 Ostendo Technologies, Inc. Compression Methods and Systems for Near-Eye Displays
KR20200122319A (en) 2018-01-14 2020-10-27 라이트 필드 랩 인코포레이티드 4D energy field package assembly
CN108234986B (en) * 2018-01-19 2019-03-15 姚惜珺 For treating the 3D rendering management method and management system and device of myopia or amblyopia
CN109875863B (en) * 2019-03-14 2021-07-20 江苏睿世力科技有限公司 Head-mounted VR eyesight improving system based on binocular vision and mental image training
CN110007475A (en) * 2019-04-17 2019-07-12 万维云视(上海)数码科技有限公司 Utilize the method and apparatus of virtual depth compensation eyesight
CN112329216A (en) * 2020-10-23 2021-02-05 杭州几目科技有限公司 Human eye model and design method thereof
US11841513B2 (en) 2021-05-13 2023-12-12 Coretronic Corporation Light field near-eye display device and method of light field near-eye display
CN114240944B (en) * 2022-02-25 2022-06-10 杭州安脉盛智能技术有限公司 Welding defect detection method based on point cloud information
CN116152123B (en) * 2023-04-21 2023-09-19 荣耀终端有限公司 Image processing method, electronic device, and readable storage medium

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103026367A (en) * 2010-06-11 2013-04-03 焦点再现 Systems and methods for rendering a display to compensate for a viewer's visual impairment
CN104182979A (en) * 2014-08-22 2014-12-03 中国科学技术大学 Visual impairment simulation method and device

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101766472B (en) * 2009-12-31 2012-03-21 中国科学院长春光学精密机械与物理研究所 Liquid crystal adaptive retinal imaging optical system for aberration correction with self-regulating visibility
JP2012009010A (en) * 2010-05-25 2012-01-12 Mitsubishi Electric Corp Image processing device, image processing method and image display device
EP2680593A1 (en) * 2012-06-26 2014-01-01 Thomson Licensing Method of adapting 3D content to an observer wearing prescription glasses
CN103236082B (en) * 2013-04-27 2015-12-02 南京邮电大学 Towards the accurate three-dimensional rebuilding method of two-dimensional video of catching static scene

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103026367A (en) * 2010-06-11 2013-04-03 焦点再现 Systems and methods for rendering a display to compensate for a viewer's visual impairment
CN104182979A (en) * 2014-08-22 2014-12-03 中国科学技术大学 Visual impairment simulation method and device

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
The Transient Component of Disparity Vergence may be an Indication of Progressive Lens Acceptability; Carlos A. Castillo; Engineering in Medicine and Biology Society, 2006 (EMBS '06), 28th Annual International Conference of the IEEE; 2006-09-03; full text *
Three-dimensional ranging and positioning based on binocular vision (基于双目视觉的三维测距与定位); Li Hao (李浩); Master's thesis, South China University of Technology; 2012-10-10; full text *

Also Published As

Publication number Publication date
CN104618710A (en) 2015-05-13

Similar Documents

Publication Publication Date Title
CN104618710B (en) Dysopia correction system based on enhanced light field display
JP7078540B2 (en) Image creation device, image creation method, image creation program, spectacle lens design method and spectacle lens manufacturing method
US20180218642A1 (en) Altered Vision Via Streamed Optical Remapping
JP6854647B2 (en) How to optimize your optical lens device for your wearer
US9364142B2 (en) Simulation device, simulation system, simulation method and simulation program
CN104306102A (en) Head wearing type vision auxiliary system for patients with vision disorder
ES2836790T3 (en) Method and system for improving an ophthalmic prescription
US20220207919A1 (en) Methods, devices and systems for determining eye parameters
CN110770636B (en) Wearable image processing and control system with vision defect correction, vision enhancement and perception capabilities
CN105455774A (en) Psychophysical measurement method for controlling lower aniseikonia on basis of interocular contrast ratio
WO2018145460A1 (en) Smart user-experience device and smart helmet
WO2016143861A1 (en) Measurement system for eyeglasses-wearing parameter, measurement program, measurement method therefor, and manufacturing method for eyeglasses lens
WO2016086438A1 (en) Head-mounted multimedia terminal auxiliary viewing system for visually impaired person
US20220058999A1 (en) Automated vision care diagnostics and digitally compensated smart eyewear
JP2003177076A (en) Method and apparatus for displaying binocular view performance of spectacles lens
CN103784298A (en) Visual training appearance is corrected to individualized human eye aberration of wear-type
US10255676B2 (en) Methods and systems for simulating the effects of vision defects
Cimmino et al. A method for user-customized compensation of metamorphopsia through video see-through enabled head mounted display
US11614623B2 (en) Holographic real space refractive system
Liu et al. A holographic waveguide based eye tracker
US11256110B2 (en) System and method of utilizing computer-aided optics
EP4364642A1 (en) Computer-implemented methods and devices for determining refractive errors
CN110974147B (en) Binocular vision function detection quantification output device for binocular vision
Fischer The Influence of Eye Model Complexity and Parameter Variations on the Predictive Power of Simulated Eye-Tracking Data
EP4178445A1 (en) Holographic real space refractive system

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant