Medical Image Analysis, Volume 52, February 2019
- Yiyuan Zhao, Srijata Chakravorti, Robert F. Labadie, Benoit M. Dawant, Jack H. Noble: Automatic graph-based method for localization of cochlear implant electrode arrays in clinical CT with sub-voxel accuracy. 1-12
- Johannes Lindemeyer, Ana-Maria Oros-Peusquens, N. Jon Shah: Quality-based UnwRap of SUbdivided Large Arrays (URSULA) for high-resolution MRI data. 13-23
- Hassan Al Hajj, Mathieu Lamard, Pierre-Henri Conze, Soumali Roychowdhury, Xiaowei Hu, Gabija Marsalkaite, Odysseas Zisimopoulos, Muneer Ahmad Dedmari, Fenqiang Zhao, Jonas Prellberg, Manish Sahu, Adrian Galdran, Teresa Araújo, Duc My Vo, Chandan Panda, Navdeep Dahiya, Satoshi Kondo, Zhengbing Bian, Gwenolé Quellec: CATARACTS: Challenge on automatic tool annotation for cataRACT surgery. 24-41
- Vimal Chandran, Ghislain Maquer, Thomas Gerig, Philippe Zysset, Mauricio Reyes: Supervised learning for bone shape and cortical thickness estimation from CT images for finite element analysis. 42-55
- Timo Roine, Ben Jeurissen, Daniele Perrone, Jan Aelterman, Wilfried Philips, Jan Sijbers, Alexander Leemans: Reproducibility and intercorrelation of graph theoretical measures in structural brain connectivity networks. 56-67
- Tanja Lossau, Hannes Nickisch, Tobias Wissel, Rolf Bippus, Holger Schmitt, Michael M. Morlock, Michael Grass: Motion artifact recognition and quantification in coronary CT angiography using convolutional neural networks. 68-79
- Yang Li, Jingyu Liu, Xinqiang Gao, Biao Jie, Minjeong Kim, Pew-Thian Yap, Chong-Yaw Wee, Dinggang Shen: Multimodal hyper-connectivity of functional networks using functionally-weighted LASSO for MCI classification. 80-96
- Michela Antonelli, M. Jorge Cardoso, Edward W. Johnston, Mrishta Brizmohun Appayya, Benoît Presles, Marc Modat, Shonit Punwani, Sébastien Ourselin: GAS: A genetic atlas selection strategy in multi-atlas segmentation framework. 97-108
- Felix Ambellan, Alexander Tack, Moritz Ehlke, Stefan Zachow: Automated segmentation of knee bone and cartilage combining statistical shape knowledge and convolutional neural networks: Data from the Osteoarthritis Initiative. 109-118
- Amitay Nachmani, Roey Schurr, Leo Joskowicz, Aviv A. Mezer: The effect of motion correction interpolation on quantitative T1 mapping with MRI. 119-127
- Bob D. de Vos, Floris F. Berendsen, Max A. Viergever, Hessam Sokooti, Marius Staring, Ivana Isgum: A deep learning framework for unsupervised affine and deformable image registration. 128-143
- Daniel Jimenez-Carretero, David Bermejo-Peláez, Pietro Nardelli, Patricia Fraga, Eduardo Fraile Moreno, Raúl San José Estépar, María J. Ledesma-Carbayo: A graph-cut approach for pulmonary artery-vein segmentation in noncontrast CT images. 144-159
- Shan E Ahmed Raza, Linda Cheung, Muhammad Shaban, Simon Graham, David B. A. Epstein, Stella Pelengaris, Michael Khan, Nasir M. Rajpoot: Micro-Net: A unified model for segmentation of various objects in microscopy images. 160-173
- Jinzheng Cai, Zizhao Zhang, Lei Cui, Yefeng Zheng, Lin Yang: Towards cross-modal organ translation and segmentation: A cycle- and shape-consistent generative adversarial network. 174-184
- Xiaofeng Qi, Lei Zhang, Yao Chen, Yong Pi, Yi Chen, Qing Lv, Zhang Yi: Automated diagnosis of breast ultrasonography images using deep neural networks. 185-198
- Simon Graham, Hao Chen, Jevgenij Gamper, Qi Dou, Pheng-Ann Heng, David R. J. Snead, Yee-Wah Tsang, Nasir M. Rajpoot: MILD-Net: Minimal information loss dilated network for gland instance segmentation in colon histology images. 199-211
- Sofie Tilborghs, Tom Dresselaers, Piet Claus, Guido Claessen, Jan Bogaert, Frederik Maes, Paul Suetens: Robust motion correction for cardiac T1 and ECV mapping using a T1 relaxation model approach. 212-227