Over 100 years of film production have developed a sophisticated style of directing the camera and choosing the right perspective. Camera movement and positioning help focus the viewer's attention and heighten a film's emotional impact. In video games, this style of camera positioning has been impossible because of the interactivity of games, even though cut-scenes have reproduced the perspectives and the look of film for more than a decade.

Automoculus tries to learn the rules of the dramaturgy of camerawork using machine-learning algorithms such as support vector machines, transferring what it learns from training data drawn from decades of film history to the interactive plot of games. It also optimizes the position of the game camera to capture the actions and emotions of the virtual actors. The following three videos show the automatically positioned camera in situations typical of games:
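As an illustration of the learning step, the sketch below trains a support vector machine to predict a shot type from scene features. The feature set (dialogue, movement, emotional intensity) and the shot labels are hypothetical stand-ins; the real system learns from much richer annotations of film scenes.

```python
from sklearn.svm import SVC

# Hypothetical features per scene beat: (dialogue?, movement?, emotional intensity)
# -- placeholders for illustration, not the actual Automoculus feature set.
X_train = [
    [1, 0, 0.2], [1, 0, 0.8], [0, 1, 0.3],
    [0, 1, 0.9], [1, 1, 0.5], [0, 0, 0.1],
]
# Hypothetical shot-type annotations taken from film scenes
y_train = ["close-up", "close-up", "full", "medium", "medium", "full"]

# Train an SVM classifier on the annotated examples
clf = SVC(kernel="rbf")
clf.fit(X_train, y_train)

# Ask the model which shot type fits a new, unseen scene beat
shot = clf.predict([[1, 0, 0.7]])[0]
```

Once trained on annotated film material, such a classifier can be queried at runtime for each beat of the interactive plot.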

Technically, Automoculus uses Blender to define the scene and its animation in 3D. JetBrains MPS is used to define the script for the scene and the training data, and Python with the libraries SciPy and scikit-learn handles the camera-position optimization and the machine learning. All parts of the software are tied together through Blender's scripting interface.
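The optimization step can be sketched with SciPy as follows: the camera position is chosen by minimizing a cost function over 3D coordinates. The cost terms here (preferred shooting distance, a penalty for floor-level shots) are simplified assumptions for illustration; the actual cost function in the thesis also accounts for visibility and framing of the actors.

```python
import numpy as np
from scipy.optimize import minimize

ACTOR = np.array([0.0, 1.7, 0.0])  # head position of a virtual actor (x, y=up, z)
IDEAL_DIST = 3.0                   # hypothetical preferred shooting distance

def camera_cost(cam):
    """Toy cost: deviation from the ideal distance plus a low-camera penalty."""
    cam = np.asarray(cam)
    dist_term = (np.linalg.norm(cam - ACTOR) - IDEAL_DIST) ** 2
    height_term = max(0.0, 1.0 - cam[1]) ** 2  # discourage floor-level shots
    return dist_term + height_term

# Start from a poor position and let the optimizer find a better one
start = np.array([10.0, 0.5, 10.0])
result = minimize(camera_cost, start, method="Nelder-Mead")
```

In the full pipeline, the optimized coordinates would be written back to Blender's camera object through its scripting interface.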

The whole thesis can be downloaded as a PDF here (in German). The source code and the training data are available in this GitHub repository.

For the assessments of the thesis (graded excellent), see the German version of this page.