In a wide range of semantic segmentation tasks, fully convolutional neural networks (F-CNNs) have been successfully leveraged to achieve state-of-the-art performance. Architectural innovations in F-CNNs have mainly focused on improving spatial encoding or network connectivity to aid gradient flow. In this paper, we pursue an alternative direction: recalibrating the learned feature maps adaptively, boosting meaningful features while suppressing weak ones. The recalibration is achieved by simple computational blocks that can be easily integrated into F-CNN architectures. We draw our inspiration from the recently proposed "squeeze and excitation" (SE) modules for channel recalibration in image classification. Toward this end, we introduce three variants of SE modules for segmentation: 1) squeezing spatially and exciting channel-wise; 2) squeezing channel-wise and exciting spatially; and 3) joint spatial and channel SE. We incorporate the proposed SE blocks into three state-of-the-art F-CNNs and demonstrate a consistent improvement in segmentation accuracy on three challenging benchmark datasets. Importantly, SE blocks increase model complexity only minimally, by about 1.5%, while the Dice score increases by 4%-9% in the case of U-Net. Hence, we believe that SE blocks can be an integral part of future F-CNN architectures.
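The three recalibration variants can be sketched as follows for a single feature map. This is a minimal NumPy illustration, not the paper's implementation: the hidden-layer reduction, the weight shapes, and the element-wise max used to combine the two paths in the joint variant are assumptions made here for brevity.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cse(x, w1, w2):
    """Channel SE: squeeze spatially (global average pool),
    then excite each channel with a gate in (0, 1)."""
    z = x.mean(axis=(1, 2))                    # squeeze: (C,)
    s = sigmoid(w2 @ np.maximum(w1 @ z, 0.0))  # excitation: (C,)
    return x * s[:, None, None]                # rescale channels

def sse(x, w):
    """Spatial SE: squeeze channel-wise (a 1x1 convolution, i.e. a
    weighted sum over channels), then excite each spatial location."""
    q = sigmoid(np.tensordot(w, x, axes=([0], [0])))  # gate map: (H, W)
    return x * q[None, :, :]                          # rescale locations

def scse(x, w1, w2, w):
    """Joint spatial and channel SE: combine both recalibrated maps
    (element-wise max here; addition is another aggregation choice)."""
    return np.maximum(cse(x, w1, w2), sse(x, w))

# Toy feature map with C channels and an H x W spatial grid.
rng = np.random.default_rng(0)
C, H, W, r = 8, 5, 5, 2                  # r: illustrative reduction ratio
x = rng.standard_normal((C, H, W))
w1 = rng.standard_normal((C // r, C))    # channel-excitation weights
w2 = rng.standard_normal((C, C // r))
w = rng.standard_normal(C)               # 1x1-conv weights for spatial SE
y = scse(x, w1, w2, w)
```

Because every gate lies in (0, 1), the blocks only rescale activations: the output has the same shape as the input and no activation grows in magnitude, which is what makes them cheap drop-in additions to an existing F-CNN.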