A persistent issue in deep learning (DL) is the inability of models to generalize to domains on which they were not trained. For example, a model trained to segment an organ in MRI scans often fails dramatically when tested on computed tomography (CT) scans. Since manual segmentation is extremely time-consuming, it is often not feasible to acquire an annotated dataset in the target domain. Domain adaptation allows knowledge learned from a labelled source domain to be transferred to a target domain. In this work, we address the gap in model performance between segmenting intravenous-contrast (IVC) enhanced and non-contrast (NC) CT scans. Most of the publicly available, large-scale, annotated CT datasets are IVC-enhanced; however, physicians frequently use NC scans in clinical practice. This necessitates methods capable of functioning reliably across both domains. We propose a novel DL framework that segments the pancreas from non-contrast CT scans by training with the help of IVC-enhanced CT scans. Our method first uses a CycleGAN to create synthetic NC (s-NC) variants from IVC scans. Subsequently, we introduce a multilevel 3D UNet architecture to perform pancreas segmentation. The proposed method significantly outperforms the baseline, improving the Dice coefficient by 6.2%. To our knowledge, this method is the first of its kind in pancreas segmentation from NC CTs. © 2020 SPIE.
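The abstract reports results in terms of the Dice coefficient, the standard overlap metric for segmentation. As a point of reference, a minimal NumPy sketch of the Dice coefficient for binary masks (function name and epsilon smoothing are illustrative choices, not from the paper):

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    """Dice similarity coefficient: 2*|A ∩ B| / (|A| + |B|) for binary masks."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    # eps avoids division by zero when both masks are empty
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

# Example: predicted mask covers 4 voxels, ground truth covers 8, overlap is 4
pred = np.zeros((4, 4), dtype=bool); pred[:2, :2] = True
target = np.zeros((4, 4), dtype=bool); target[:2, :] = True
score = dice_coefficient(pred, target)  # 2*4 / (4 + 8) ≈ 0.667
```

The same formula extends unchanged to 3D volumes, since the sums run over all voxels regardless of array shape.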