From: Ken Hinckley <kenh@microsoft.com>
To: "3DUI (E-mail)" <3d-ui@hitl.washington.edu>
Subject: doug's navigation taxonomy
Date: Fri, 8 May 1998 14:04:29 -0700
Message-ID: <5F68209F7E4BD111A5F500805FFE35B905797712@red-msg-54.dns.microsoft.com>

Hi folks,

This message is mostly intended for Doug, but I guess everyone else can eavesdrop :)

Doug, I thought your motoric approach to the navigation taxonomy was interesting. I actually believe this kind of taxonomy should live side by side with the task-oriented approach that Matt was suggesting. They look at the issues at different levels of granularity and thus should be good for reasoning about different aspects of the design space.

A MUST-READ paper for you is:

Buxton, W., "Chunking and Phrasing and the Design of Human-Computer Dialogues," Proc. IFIP 10th World Computer Congress, ed. Kugler, H. J. Amsterdam: North-Holland, 1986, 475-480.

This is a terrific model of input that Buxton has proposed. The examples are geared toward desktop input devices, but the methodology can be applied directly to 3D devices and virtual environments.
(I used this approach, for example, to describe my doll's head system; see the Task Analysis section of http://www.research.microsoft.com/ui/kenh/papers/BimanRef.ps.)

The great insight of this paper is that the "elemental" tasks of an interaction technique depend entirely on the level of analysis that you choose (and this level is bounded by the capabilities of the input device(s) you choose). For example, pointing at something on a computer screen is often thought of as an elemental task. This is true with a mouse; but if you only have arrow keys, then pointing is really a compound task consisting of quantify-X and quantify-Y subtasks.

You can apply the same approach to increasingly complicated tasks. For example, specifying a line might consist of specifying two endpoints: that's two pointing tasks with a mouse, but it could be one compound task in a two-handed interface with a pair of pointing devices.

Your navigation taxonomy has this same flavor; I suspect that if you look at it more carefully, you'll see that the steps you're proposing are just one possible way to divide up what is really a hierarchy of interrelated tasks. This would let you give a general description of the motoric steps for navigation that is independent of the particular devices used to implement them.

Hope this helps some.

Ken

Ken Hinckley
Microsoft Research
One Microsoft Way
Redmond, WA 98052
(425) 703-9065
kenh@microsoft.com
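[Archive note: the device-dependent decomposition described above can be sketched in a few lines of code. This toy Python model is only an illustration; the function and task names (`decompose`, "point", "quantify-x", etc.) are invented here, not taken from Buxton's paper. A task is elemental for a device exactly when the device supports it directly; otherwise it expands into its subtasks.]

```python
# Toy sketch of Buxton-style task decomposition: whether a task is
# "elemental" depends on the capabilities of the input device.
from dataclasses import dataclass, field


@dataclass
class Task:
    name: str
    subtasks: list = field(default_factory=list)


def decompose(task, device_capabilities):
    """Return the leaf tasks a given device must perform for `task`.

    A task the device supports directly is elemental for that device;
    otherwise it expands recursively into its subtasks.
    """
    if task.name in device_capabilities:
        return [task.name]
    leaves = []
    for sub in task.subtasks:
        leaves.extend(decompose(sub, device_capabilities))
    return leaves


# "Point" decomposes into quantify-X / quantify-Y when pointing itself
# is not a device primitive; "specify-line" is two pointings.
point = Task("point", [Task("quantify-x"), Task("quantify-y")])
line = Task("specify-line", [point, point])

print(decompose(line, {"point"}))                     # mouse: two pointing tasks
print(decompose(line, {"quantify-x", "quantify-y"}))  # arrow keys: four quantify subtasks
print(decompose(line, {"specify-line"}))              # two-handed device pair: one chunk
```

The same compound task bottoms out at different "elemental" levels depending on which capabilities the device set offers, which is the level-of-analysis point made in the message.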