3D User Interfaces: Theory and Practice

Part III: 3D Interaction Techniques

In Part II, we presented information about the input and output device technologies that make 3D interaction possible. However, designing good interaction devices is not sufficient to produce a usable 3D interface. In Part III, we discuss interaction techniques for the most common 3D interaction tasks. Remember that interaction techniques are methods used to accomplish a given task via the interface, and that they include both hardware and software components. The software components of interaction techniques are also known as control-display mappings and are responsible for translating information from the input devices into associated system actions that are then displayed to the user (see the introduction to Part II). Many of the techniques we present can be implemented using a variety of different devices; the interaction concept and the implementation details are what make them unique.
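To make the idea of a control-display mapping concrete, the sketch below (our own illustration, not drawn from the text) shows perhaps the simplest possible software mapping: a change in tracked hand position is scaled by a control-display gain and applied to a virtual object's position. The function name, argument names, and the list-of-coordinates representation are all assumptions made for this example.

```python
def apply_cd_mapping(prev_tracker_pos, curr_tracker_pos, object_pos, gain=1.0):
    """Map a change in tracked hand position to a virtual-object displacement.

    A gain of 1.0 gives an isomorphic (one-to-one) mapping; gains above 1.0
    amplify the user's hand motion, and gains below 1.0 attenuate it.
    All positions are (x, y, z) sequences; names are illustrative only.
    """
    # Displacement of the input device since the last frame, scaled by gain.
    delta = [(c - p) * gain for c, p in zip(curr_tracker_pos, prev_tracker_pos)]
    # The "display" side of the mapping: move the object by the scaled delta.
    return [o + d for o, d in zip(object_pos, delta)]

# Example: a 1-unit hand movement along each axis, amplified by a gain of 2.0.
new_pos = apply_cd_mapping([0, 0, 0], [1, 1, 1], [10, 10, 10], gain=2.0)
```

Swapping in a different gain function, or mapping the same tracker input to rotation rather than translation, yields a different interaction technique even though the input device is unchanged, which is the sense in which the mapping, not the device, defines the technique.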

We organize Part III by user interaction task. Each chapter describes a task and variations on that task. Techniques that can be used to complete that task, along with guidelines for choosing among the techniques, are discussed. We also provide implementation details for some important techniques.

The implementation issues are described in English in combination with mathematical notation (using the mathematical concepts described in Appendix A). We decided not to provide code or pseudocode for several reasons. Code would have been extremely precise, but we would have had to choose a language and a toolkit or library on which the code would be based. Since there is currently no standard development environment for 3D UIs, this code would have been directly useful to only a small percentage of readers. Pseudocode would have been more general, but even with pseudocode, we would be assuming that your development environment provides a particular set of functionality and uses a particular programming style. Thus, we decided to use both natural and mathematical languages. This choice ensures precision and descriptiveness, and allows each reader to translate the implementation concepts into his or her own development environment.

Chapter 5 covers the closely related tasks of selection and manipulation. We begin with these tasks because they have been widely studied, they are fundamental aspects of 3D interaction, and techniques for these tasks form the basis for many other 3D interaction techniques. Chapters 6 and 7 relate to the task of navigation, which is movement in and around an environment—a fundamental human task. Navigation includes both travel (Chapter 6) and wayfinding (Chapter 7). Travel is the motor component of navigation—the low-level actions that the user makes to control the position and orientation of the viewpoint. Wayfinding is the cognitive component of navigation—high-level thinking, planning, and decision making related to user movement.

System control is the topic of Chapter 8. This interaction task involves changing the mode or state of the system, often through commands or menus. Finally, Chapter 9 covers symbolic input, the task of entering or editing text, numbers, and other symbols. These two tasks have not been as heavily researched as manipulation, travel, and wayfinding, but they are nonetheless important for many 3D UIs.

Chapter 5: Selection and Manipulation
Chapter 6: Travel
Chapter 7: Wayfinding
Chapter 8: System Control
Chapter 9: Symbolic Input
