This paper addresses the challenges of creating high-quality composite 3D content for 3DTV applications and post-production visual effects. We present a novel content-aware compositing technique that faithfully preserves the salient structures of the cloned source and target content and avoids major conflicting stereopsis cues, so as to maintain a pleasant 3D illusion. Our approach learns the appearance layouts of both source and target scene elements. The system extracts object-significance prior maps from classified labels and derives geometric transforms that compensate for the 3D perspective mismatches between source and target images using a novel depth image-based rendering procedure. For seamless cloning, we apply a new depth-consistent interpolation technique that uses the classified likelihood confidences to weight salient and low-significance regions and to re-estimate plausible depth values for the cloned region in accordance with the target's 3D structure. We further adopt a novel content-preserving local warping scheme to reduce apparent distortions in object shape, size, and perspective. Finally, we propose a content-aware mean value cloning technique that seamlessly merges the warped cloned patches with the geometric-appearance context of the new background and homogenizes vague boundaries with the aid of an object saliency map to remove smudging effects. The overall process is formulated as an energy minimization problem and optimally regularized for large warps, vertical disparities, and stereo baseline changes. Plausible results demonstrate the effectiveness of our approach. Copyright 2014 ACM.
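Mean value cloning, as referenced above, is conventionally built on mean-value coordinates: each interior pixel of the cloned region is expressed as a normalized weighted combination of the boundary vertices, and the source/target boundary mismatch is diffused inward with those weights. As a minimal illustrative sketch (not the paper's actual implementation, and assuming a simple closed polygonal boundary in counter-clockwise order), the weights can be computed as:

```python
import math

def _angle(u, v):
    # Unsigned angle between 2D vectors u and v, in [0, pi].
    dot = u[0] * v[0] + u[1] * v[1]
    cross = u[0] * v[1] - u[1] * v[0]
    return math.atan2(abs(cross), dot)

def mean_value_weights(x, y, boundary):
    """Normalized mean-value coordinate weights of interior point (x, y)
    with respect to the polygon vertices in `boundary`.

    Hypothetical helper for illustration; `boundary` is a list of (px, py)
    vertices of a simple closed polygon containing (x, y).
    """
    n = len(boundary)
    weights = []
    for i in range(n):
        px, py = boundary[i]
        prev = boundary[(i - 1) % n]
        nxt = boundary[(i + 1) % n]
        d = math.hypot(px - x, py - y)
        # Angles at (x, y) subtended by the edges adjacent to vertex i.
        a_prev = _angle((prev[0] - x, prev[1] - y), (px - x, py - y))
        a_next = _angle((px - x, py - y), (nxt[0] - x, nxt[1] - y))
        weights.append((math.tan(a_prev / 2) + math.tan(a_next / 2)) / d)
    total = sum(weights)
    return [w / total for w in weights]
```

Interpolating the boundary color difference with these weights yields the smooth membrane added to the source patch; the content-aware variant described in the abstract would additionally modulate this blend by the saliency and depth-confidence maps.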