Traditional authentication mechanisms use passwords, PINs, and biometrics, but these authenticate only at the point of entry. Continuous authentication schemes instead allow systems to verify a user's identity throughout a session, mitigating unauthorised access after login. However, recent developments in generative modelling pose a significant threat to continuous authentication systems: attackers can craft adversarial samples that gain unauthorised access and may even prevent a legitimate user from accessing protected data in the network. Research on the use of generative models for attacking continuous authentication remains scarce. This paper explores the feasibility of bypassing continuous authentication with generative models, measures the damage such attacks inflict, and recommends metrics for comparing the various reported attacks on such systems. Our empirical results demonstrate that generative models cause a higher Equal Error Rate and misclassification error than perturbation-based models, while also increasing training and detection time in attack scenarios. These results show that data samples crafted by generative models pose a severe threat to continuous authentication schemes that rely on motion sensor data.
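For readers unfamiliar with the headline metric, the Equal Error Rate is the operating point at which the false acceptance rate (FAR) equals the false rejection rate (FRR); a higher EER under attack means the authenticator is worse at separating impostors from the legitimate user. The snippet below is a minimal illustrative sketch of how EER can be estimated from raw verification scores, not the paper's implementation; the function name and the convention that higher scores indicate a more likely legitimate user are assumptions made here for clarity.

```python
import numpy as np

def equal_error_rate(genuine_scores, impostor_scores):
    """Estimate the EER: the threshold at which FAR and FRR cross.

    Assumes higher scores indicate a more likely legitimate user
    (an illustrative convention, not taken from the paper).
    """
    # Candidate thresholds: every observed score.
    thresholds = np.sort(np.concatenate([genuine_scores, impostor_scores]))
    # FAR: fraction of impostor scores accepted at each threshold.
    far = np.array([(impostor_scores >= t).mean() for t in thresholds])
    # FRR: fraction of genuine scores rejected at each threshold.
    frr = np.array([(genuine_scores < t).mean() for t in thresholds])
    # EER is approximated at the threshold where FAR and FRR are closest.
    idx = np.argmin(np.abs(far - frr))
    return (far[idx] + frr[idx]) / 2

# Toy usage with synthetic score distributions (hypothetical data).
rng = np.random.default_rng(0)
genuine = rng.normal(0.8, 0.10, 1000)   # legitimate-user scores
impostor = rng.normal(0.4, 0.15, 1000)  # attack scores
print(f"EER: {equal_error_rate(genuine, impostor):.3f}")
```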