As 3D virtual environments become more realistic, there is increasing demand for more immersive interactive narratives, but the unplanned nature of interactive storytelling makes it difficult to design camera control techniques that support them. In response, we developed a fully automated system that constructs, in real time, a cinematically expressive movie from a sequence of low-level narrative elements such as events, key subjects, key subject motions, and actions.
Our approach uses a 2D viewpoint-space partitioning technique to identify characteristic viewpoints of the relevant actions, for which we compute partial and full visibility. These partitions, or “Director Volumes”, provide a full characterisation of the space of viewpoints. Building on this spatial characterisation, the system selects appropriate Director Volumes and reasons over them to perform appropriate camera cuts, relying on traditional path-planning techniques for transitions. This approach to cinematic camera control contrasts with previous systems, which are mostly procedural, concentrate only on isolated aspects such as visibility, transitions, editing or framing, or do not allow for variations in directorial style.
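As a rough illustration of the select-then-cut logic described above, the sketch below models Director Volumes as regions scored by visibility and shot distance. All names, the scoring weights, and the cut heuristic are our own simplifying assumptions for exposition, not the system's actual implementation.

```python
from dataclasses import dataclass

@dataclass
class DirectorVolume:
    """A region of viewpoint space with uniform visibility of the key subjects.

    Hypothetical stand-in for the system's Director Volumes: `visibility`
    is the fraction of key subjects fully visible, and `shot_distance`
    is a coarse framing label.
    """
    name: str
    visibility: float      # 0.0 (occluded) .. 1.0 (fully visible)
    shot_distance: str     # e.g. "close-up", "medium", "long"

def select_volume(volumes, preferred_distance):
    """Pick the best-scoring volume; the style bonus is an assumed way to
    encode a directorial-style preference for a given shot distance."""
    def score(v):
        style_bonus = 0.5 if v.shot_distance == preferred_distance else 0.0
        return v.visibility + style_bonus
    return max(volumes, key=score)

def plan_move(current, target):
    """Assumed heuristic: cut when the framing changes markedly, otherwise
    move the camera continuously (where path planning would take over)."""
    if current.name == target.name:
        return "hold"
    if current.shot_distance != target.shot_distance:
        return "cut"
    return "transition"

volumes = [
    DirectorVolume("A", visibility=0.9, shot_distance="medium"),
    DirectorVolume("B", visibility=0.7, shot_distance="close-up"),
    DirectorVolume("C", visibility=0.4, shot_distance="long"),
]
best = select_volume(volumes, preferred_distance="close-up")
move = plan_move(volumes[0], best)
```

Here a style preferring close-ups selects volume B despite its lower raw visibility, and the framing change from a medium shot triggers a cut rather than a continuous transition.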
Part of the IRIS project.