As approaches to applying visual augmentation to virtual environments continue to be refined, the possibilities of utilising this process in robotics continue to be explored. However, these endeavours have so far yielded only a collaborative relationship between the two technologies, in which the augmented reality (AR) component remains separate from the robot's functionality. Regarding telepresence robots, research has shown that their perceived usefulness wanes in comparison to that of video conferencing tools, as they do not offer a level of immersion or a modified visual experience sufficient to justify their use. This paper therefore contributes a novel virtually augmented robot (VAR) model that embeds AR functionality into telepresence through the use of augmentation and facial recognition sub-models and algorithms. Experimental results show that combining pulse width modulation (PWM), duty cycles, digital signals and accelerometer angles provides an approach for stabilising robot motion and transferring human head movement to the robot. The results also show the benefit of a robot model with intrinsic AR functionality, as it offers a greater level of immersion and visual control of the environment.
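As an illustrative sketch of the head-movement transfer summarised above (not the authors' implementation), the following Python example converts an accelerometer-derived pitch angle into a servo PWM duty cycle. The angle limits, 1.0–2.0 ms pulse widths and 50 Hz PWM period are assumed hobby-servo defaults rather than values taken from the paper.

```python
# Illustrative sketch: mapping an accelerometer pitch angle to a servo PWM
# duty cycle for head-movement transfer. The angle limits, pulse widths
# (1.0-2.0 ms) and 50 Hz PWM period are assumed typical servo values,
# not figures taken from the paper.
import math

PWM_PERIOD_MS = 20.0      # 50 Hz servo PWM period
MIN_PULSE_MS = 1.0        # pulse width at -90 degrees
MAX_PULSE_MS = 2.0        # pulse width at +90 degrees


def pitch_from_accel(ax: float, ay: float, az: float) -> float:
    """Estimate pitch (degrees) from raw accelerometer readings in g units."""
    return math.degrees(math.atan2(ax, math.sqrt(ay * ay + az * az)))


def duty_cycle_from_pitch(pitch_deg: float) -> float:
    """Map a pitch angle in [-90, 90] degrees to a PWM duty cycle in percent."""
    pitch_deg = max(-90.0, min(90.0, pitch_deg))   # clamp to the servo's range
    fraction = (pitch_deg + 90.0) / 180.0          # 0.0 .. 1.0 across the range
    pulse_ms = MIN_PULSE_MS + fraction * (MAX_PULSE_MS - MIN_PULSE_MS)
    return 100.0 * pulse_ms / PWM_PERIOD_MS        # duty cycle in percent


if __name__ == "__main__":
    # Example: the operator tilts their head forward; the accelerometer
    # reads roughly 0.5 g on the x axis.
    pitch = pitch_from_accel(0.5, 0.0, 0.87)
    duty = duty_cycle_from_pitch(pitch)
    print(f"pitch = {pitch:.1f} deg, duty cycle = {duty:.2f}%")
```

In a real VAR system the duty cycle would drive the robot's head servo directly; the sketch only shows the angle-to-duty-cycle mapping that the stabilisation and head-tracking claims in the abstract rely on.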