Autostereoscopic 3D Display System for 3D Medical Images
Figure 1. (a) Eye tracking-based light-field 3D display concept, showing the generation and modeling of directional light rays. (b) 3D display prototype for medical applications: a 3D cardiac CT navigator.
Figure 2. Autostereoscopic 3D display concept: (1) directional light is generated by liquid crystal display (LCD) panels; (2) the 3D light field is modeled; and (3) 3D light rendering is performed according to the user's eye position, as determined by eye tracking algorithms.
Figure 3. Parameterization of a light ray in the 3D display environment.
Figure 4. A horizontal 2D slice of the light field in the 3D display. (a) Real-world setting; (b) equivalent representation in ray space. Greyscale lines represent light rays passing through the slits (lenses); red and blue lines represent light rays passing through the left and right eye, respectively. Square boxes mark intersection points.
Figure 5. Illustration of the proposed eye tracking method, which includes pupil segmentation modules. The left image shows an extracted subregion (red box) including the subject's eyes and nose. The middle image shows the 11 landmark points (green dots) used to inform the Supervised Descent Method (SDM)-based shape alignment. In the right image, the green circles around the red points on the eyes indicate the pupil segmentation modules, which increase the accuracy of the eye tracking.
Figure 6. Eye center positions refined with the proposed re-weight algorithm for faces with occluded eyes. Starting from the MobileNetV2-based PFLD method (upper row), the re-weight subnet infers per-pixel confidence on the feature map from both the structure between landmarks and the landmark appearance, as shown in the bottom row.
Figure 7. Three-dimensional cardiac CT navigator software prototype. The images show the volume-rendered original CT images without any pre-processing (1st row), a segmented whole heart (2nd row), and segmented coronary arteries (3rd row).
Figure 8. Three-dimensional cardiac CT navigator usage examples. The 3D display serves as a 3D navigator for identifying coronary lesion candidates within complex coronary artery structures. The candidates can then be examined in detail in 2D modes using 2D cardiac CT images.
Figure 9. Implemented prototypes of the proposed autostereoscopic 3D display system for 3D medical images (left: 31.5”; middle: 18.4”; right: 10.1”). In the 31.5” monitor prototype (left), the light field 3D subpixel rendering is processed on a GPU and the eye tracking algorithm runs entirely on the CPU of a Windows PC. In the 18.4” (middle) and 10.1” (right) tablet display prototypes, both the light field 3D subpixel rendering and the eye tracking are processed on an FPGA board.
Figure 10. Examples of the two-view image captured at different distances: (a) captured images of rendering without eye positions; (b) captured images using the proposed 3D light field rendering method with eye positions.
Figure 11. Illustration of the proposed eye tracking method with pupil segmentation modules. The green circles around the red points on the eyes indicate the pupil segmentation modules, which increase the accuracy of the eye tracking. The left side shows the left camera image from a stereo webcam, while the right side shows the image captured by the right camera. The left and right images were combined to calculate the 3D eye position via stereo image matching based on triangular interpolation.
Figure 12. Stereo images of the volume-rendered 3D coronary arteries from a coronary CT angiography dataset: left-view image (left) and right-view image (right).
Figure 13. Unclear left- and right-view separation resulting from failed eye tracking. The resulting 3D crosstalk manifests as overlapping double images and causes 3D fatigue for the user.
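The 3D eye position recovery described above can be illustrated with the standard pinhole triangulation relation for a rectified stereo pair: a feature seen at horizontal pixel positions xL and xR has depth Z = f·B / (xL − xR), where f is the focal length in pixels and B the camera baseline. This is a minimal sketch of the general principle; the focal length, baseline, and principal point below are made-up illustration values, not the parameters of the prototype's webcam.

```python
def triangulate_eye(x_left, x_right, y, f=800.0, baseline=0.06, cx=320.0, cy=240.0):
    """Recover a 3D eye position (metres, camera frame) from the pixel
    coordinates of the same pupil in a rectified stereo image pair.
    f: focal length in pixels; baseline: camera separation in metres;
    (cx, cy): principal point. All parameter values are illustrative."""
    disparity = x_left - x_right        # pixels; > 0 for points in front of the rig
    if disparity <= 0:
        raise ValueError("non-positive disparity: stereo matching failed")
    z = f * baseline / disparity        # depth from the cameras
    x = (x_left - cx) * z / f           # lateral offset
    y3 = (y - cy) * z / f               # vertical offset
    return x, y3, z

# Pupil detected at x = 340 px in the left image and x = 260 px in the right:
pos = triangulate_eye(340.0, 260.0, 250.0)
print(pos)   # depth = 800 * 0.06 / 80 = 0.6 m
```

A larger disparity (the pupil shifted further between the two views) means the eye is closer to the display, which is exactly the quantity the light field renderer needs.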
Figure 14. Example of the augmented morphological information provided by the proposed 3D display system. The 1st row shows the left and right views from the 3D display prototype. The 2nd row shows an enlarged region of interest (red box, 1st row) and highlights the improvement the proposed system provides in deciphering coronary artery morphology: the left image shows separate arteries, whereas the right image shows overlapping information. Because each eye receives its corresponding view, the user experiences enhanced 3D depth perception and gains more morphological information than with 2D displays.
Figure 15. Examples of the proposed 3D display system applied to other modalities, namely abdominal CT and head CT. The inset camera images show the real-time eye tracking results for the proposed autostereoscopic 3D display.
Abstract
1. Introduction
2. Methods
2.1. Autostereoscopic 3D Display System
2.2. Light Field 3D Subpixel Rendering
- All light rays passing through the slit (lens) are represented as lines in ray space.
- All light rays passing through the eye pupil are also represented as lines in ray space.
- The light ray we see is the intersection of these two straight lines.
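The construction above can be sketched numerically. In a two-plane parameterization, a ray is the pair (u, s) of its crossings with the display plane and a parallel reference plane; all rays through any fixed point (a slit or a pupil) then trace a straight line in (u, s) ray space, and the ray actually seen is the intersection of the slit line and the eye line. The coordinates and geometry below are a hypothetical illustration of this principle, not the paper's exact parameterization.

```python
def ray_line(point, d):
    """All rays through a fixed point (x, z) form the line s = a*u + b in
    ray space, where u is the ray's crossing of the display plane z = 0
    and s its crossing of a reference plane z = d."""
    x, z = point
    a = 1.0 - d / z       # slope of the ray-space line
    b = d * x / z         # intercept
    return a, b

def intersect(line1, line2):
    """Intersection of two ray-space lines: the unique ray that passes
    through both fixed points (e.g. a slit and an eye pupil)."""
    a1, b1 = line1
    a2, b2 = line2
    u = (b2 - b1) / (a1 - a2)
    return u, a1 * u + b1

d = 1.0                   # reference-plane distance (arbitrary units)
slit = (0.01, 0.005)      # slit position (x, z): 5 mm in front of the panel
eye = (0.03, 0.60)        # pupil position: 60 cm viewing distance

u, s = intersect(ray_line(slit, d), ray_line(eye, d))
# u is the panel location whose light reaches the eye through this slit --
# the quantity assigned to each subpixel during light field rendering.
print(u, s)
```

Because each ray-space line is determined by just two coefficients, finding the visible ray per subpixel reduces to a single line-line intersection, which is what makes per-eye subpixel rendering cheap enough for real time.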
2.3. Eye Tracking
3. Results and Discussion
3.1. Experimental Results on Light Field 3D Subpixel Rendering
3.2. Experimental Results on Eye Tracking
3.3. Experimental Results on Autostereoscopy Visualization Systems for 3D Cardiac CT
4. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
|  | Monitor | Mobile Tablet 1 | Mobile Tablet 2 |
| --- | --- | --- | --- |
| Screen Size | 31.5” | 18.4” | 10.1” |
| 3D Optical Module | Barrier-type BLU (3D) | Dual-layer BLU (2D/3D) | Single-layer BLU (2D/3D) |
| 3D Processing | Light Field Subpixel Rendering | Light Field Subpixel Rendering | Light Field Subpixel Rendering |
| 3D Viewing Angle | H60°, V40° | H60°, V40° | H60°, V40° |
| 3D Resolution | FHD | QHD | QHD |
| Total Viewing Number | Continuous Parallax | Continuous Parallax | Continuous Parallax |
| Display Driving HW | PC (Windows) | FPGA (Android) | FPGA (Android) |
| Distance (mm) | 3D Crosstalk, Rendering w/o Eye Positions | 3D Crosstalk (Ours) |
| --- | --- | --- |
| 300 | 77.59 | 7.90 |
| 400 | 85.79 | 8.07 |
| 500 | 91.10 | 8.52 |
| 600 | 88.20 | 8.79 |
| Average | 85.67 | 8.32 |
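The per-distance crosstalk figures are consistent with the reported row of averages, which can be checked directly:

```python
# 3D crosstalk measured at 300/400/500/600 mm (values from the table above)
without_eye = [77.59, 85.79, 91.10, 88.20]  # rendering without eye positions
with_eye = [7.90, 8.07, 8.52, 8.79]         # proposed eye tracking-based rendering

avg = lambda xs: round(sum(xs) / len(xs), 2)
print(avg(without_eye), avg(with_eye))  # 85.67 8.32
```

The roughly tenfold reduction in average crosstalk (85.67 vs. 8.32) holds at every tested viewing distance, not only in aggregate.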
|  | Clean Faces | Occluded Faces |
| --- | --- | --- |
| Illumination condition | 300–400 lux | 300–400 lux |
| Distance from camera | 1 m | 1 m |
| Detection accuracy | 99.80% | 99.80% |
| Tracking precision | 1.5 mm | 6.5 mm |
| Speed | 250 fps | 100 fps |
| Test set size (images) | 20,000 | 5000 |
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
© 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Kang, D.; Choi, J.-H.; Hwang, H. Autostereoscopic 3D Display System for 3D Medical Images. Appl. Sci. 2022, 12, 4288. https://doi.org/10.3390/app12094288