Facial caricature is the art of drawing faces in an exaggerated way to convey humor or sarcasm. Automatic caricaturization has been explored in both the 2D and 3D domains. In this paper, we present the first study of facial mesh caricaturization techniques. In addition to a user study, we propose two novel approaches to automatically caricaturize input facial scans. These approaches fill gaps in the literature in terms of user control and caricature style transfer, and explore, for the first time, the use of deep learning for 3D mesh caricaturization. The first approach is a gradient-based EDFM (Exaggerating the Difference From the Mean) with data-driven stylization; it combines two deformation processes: exaggeration of facial curvature and of facial proportions. The second is the first GAN for unpaired face-scan-to-3D-caricature translation. We leverage existing facial and caricature datasets, along with recent domain-to-domain translation methods and 3D convolutional operators, to learn to caricaturize 3D facial scans in an unsupervised way. To evaluate these two novel approaches and compare them with the state of the art, we conduct a user study with 49 participants. It highlights the subjectivity of caricature perception and the complementarity of the methods. Finally, we provide insights for automatically generating caricaturized 3D facial meshes.