From mconway@microsoft.com Fri May 1 17:59:32 1998
From: Matt Conway <mconway@microsoft.com>
To: 3d-ui@hitl.washington.edu
Subject: RE: Taxonomy of navigation techniques
Date: Fri, 1 May 1998 14:58:45 -0700

Doug,

Thanks for the mail! I'm familiar with your work, yes. (Good stuff.)

Your definition of navigation is just fine, but I have some thoughts on
making that definition somewhat broader. I believe there are nuances of
motion that exist between motion control and wayfinding.

That aside for the moment, it seems to me that the taxonomy listed below is
concerned with *how* the camera is moved, in terms of the data values that
represent camera state (and the time derivatives of that state), whereas I've
been thinking more in terms of a task-based model. Of course, I don't mean to
imply that these two things are mutually exclusive. I like the way you've
laid things out. My model is different, but not radically so.

I start by asking why the user is moving the camera in the first place: to
examine an object? To learn a route? To gain survey knowledge? To simply
travel a route? To establish and/or maintain a particular view of an object?
Once you answer these questions, then (I believe) you have a much stronger
way of thinking about which degrees of freedom you should put in the hands
of the user (and in what form), and which degrees of freedom you should
allow to be managed by the system.

I think it is vital that we keep separate in our heads (1) the user's goals,
(2) the abstract solution that will satisfy that goal, and (3) the particular
interface that implements the abstract solution. For example (pardon the
vertical whitespace):

User Goal        Abstract Solution         Implementation
===========================================================================
to fly to        Name the object;          voice command
an object        system chooses a          --------------------------------
                 trajectory                user chooses location off a list
                                           --------------------------------
                                           user clicks on object in scene
---------------------------------------------------------------------------
                 Specify path by hand      virtual joystick for
                                           velocity/direction control
===========================================================================

...and so on. (A rough sketch of the first row appears below.) These things
are all in the taxonomy tree you lay out below, just organized in a different
manner.

Parting shot: there should be a list of notes next to each of the abstract
solutions that lists the strengths and weaknesses of each solution. Some are
good for gaining survey knowledge; others are better suited for situations
where the user needs fine-grained control.

As you can probably tell, this thinking of mine is still pretty new, so I
should probably stop here and send out something more formal to the mailing
list when it is better developed.
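As a rough illustration of the first table row - the user names the object,
the system chooses the trajectory - here is a minimal Python sketch. The
scene contents, the object names, the ease-in/ease-out curve, and the fixed
step count are all illustrative assumptions, not part of any particular
system or of the taxonomy itself.

    # Sketch: "fly to an object" where the user supplies only the goal
    # (an object name) and the system manages the motion itself.
    from dataclasses import dataclass

    @dataclass
    class Camera:
        x: float
        y: float
        z: float

    # Toy "scene": hypothetical named objects mapped to world positions.
    SCENE = {
        "statue":  (10.0, 0.0, -5.0),
        "doorway": (-3.0, 1.5, 12.0),
    }

    def smoothstep(t: float) -> float:
        """Ease-in/ease-out so the camera does not start or stop abruptly."""
        return t * t * (3.0 - 2.0 * t)

    def fly_to(camera: Camera, object_name: str, steps: int = 60):
        """Yield intermediate camera positions along a system-chosen path.

        The user only names the target (e.g. via a voice command or a pick
        in the scene); the trajectory is a degree of freedom the system owns.
        """
        tx, ty, tz = SCENE[object_name]
        sx, sy, sz = camera.x, camera.y, camera.z
        for i in range(1, steps + 1):
            a = smoothstep(i / steps)
            yield Camera(sx + a * (tx - sx),
                         sy + a * (ty - sy),
                         sz + a * (tz - sz))

    if __name__ == "__main__":
        cam = Camera(0.0, 2.0, 0.0)
        for cam in fly_to(cam, "statue"):
            pass  # in a real system: render a frame here
        print(cam)  # camera ends up at the statue's position

The point of the sketch is only the division of labor: the user supplies the
goal, and the trajectory itself is managed entirely by the system.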
I'll have more to say then about my belief that there are more nuances
between motion control and wayfinding. In closing, I absolutely agree with
your observation that camera control and object manipulation interact in
interesting ways, and that a taxonomy of camera control will almost certainly
have to have links out to a taxonomy of object manipulation.

Matt

-----Original Message-----
From: bowman@cc.gatech.edu [mailto:bowman@cc.gatech.edu]
Sent: Friday, May 01, 1998 11:03 AM
To: 3d-ui@hitl.washington.edu
Subject: Taxonomy of navigation techniques

Hi everyone,

This message is specifically for Matt Conway, but I thought it might interest
you all as well.

Matt, I read in your mail that you were looking to create a taxonomy of
3D/immersive navigation techniques. I wanted to make sure you knew about the
work I've been doing in this area.

First of all, we should define our terms. What is your definition of
navigation? In my view, navigation is made up of two components: viewpoint
motion control (or travel) and wayfinding. VMC (travel) is the motor aspect
of navigation - how does one actually move from place to place? Wayfinding is
the cognitive aspect - how does one plan a route or decide where to go next?

I have been working on travel (but not wayfinding) techniques for some time
now. If you haven't seen it, you should read our 1997 VRAIS paper "Travel in
Immersive Virtual Environments..." (Bowman, Koller, and Hodges). Among other
things, it includes a preliminary taxonomy of travel techniques. You can get
this paper, as well as another one outlining an expanded design and
evaluation framework, "A Methodology for the Evaluation of Travel
Techniques...", on my Web page:

http://www.cc.gatech.edu/gvu/people/Phd/Doug.Bowman/pubs.html

At any rate, we were never completely happy with this original taxonomy,
although it was useful in both design and evaluation. I've been working on a
new taxonomy that attempts to solve some of the problems of the first one (it
was not complete or orthogonal). I thought it would be useful for you to see
this before you began working from scratch - that's one of the purposes of
this group: not duplicating effort.

Below, I'll just include the main portion of the taxonomy that deals with
setting the viewpoint position of the user (the main task involved in
travel).

-Indication of position
--specify position
---discrete target specification (1)
     select object in environment (see selection taxonomy)
     select from menu/list
     enter coordinates
     position 3d cursor (see manipulation taxonomy)
     position 2d marker (e.g. on a map) (see manip.)
     automatic target selection
---one-time route specification (2)
     techniques for this??
     e.g. set series of 3d markers (manip.)
     e.g. specify radius of curvature, length, other parameters
     e.g. specify series of targets, do spline between
---continuous specification (specify trajectory/direction) (3)
     gaze direction
     hand tracker direction
     physical props
     virtual controls
     2D pointing
--specify velocity
---discrete velocity selection (1)
     constant velocity
     select from menu/list
     enter numeric value
     voice command
     virtual controls
     physical props
     automatic
---one-time velocity profile specification (2)
     techniques for this??
     e.g. enter series of numbers corresponding to various positions
     e.g. specify parameters, function of time or position, etc.
     automatic
---continuous specification (3)
     gesture (head, hand trackers)
     physical props
     virtual controls
     automatic
--specify acceleration
---discrete acceleration selection (1)
---one-time acceleration profile specification (2)
---continuous specification (3)

Interpret this as follows:
-it's a tree, with the level indicated by the # of dashes
-a number in parentheses means that this is a 1-of-N choice at that level in
 that subtree
-at certain points I refer to taxonomies for selection and manipulation -
 those are other taxonomies that I have been working on

I hope this is readable - if not, I can create a more pictorial version
later. Let me know your thoughts on this and how we might be able to work
together on some of these issues. Sorry for the long message.

Doug

--
Doug Bowman, Ph.D. Candidate
College of Computing, GVU Center, Georgia Tech
Room 388 CRB, (404) 894-5104
bowman@cc.gatech.edu
http://www.cc.gatech.edu/gvu/people/Phd/Doug.Bowman/
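As a rough illustration of one leaf combination from the taxonomy above -
continuous direction specification via gaze direction, paired with a
discrete, constant velocity - here is a minimal Python sketch. The pose
representation, the frame loop, and the idea that yaw/pitch are read from a
head tracker each frame are illustrative assumptions, not any particular
tracker's API.

    # Sketch: gaze-directed steering with a constant speed.
    import math
    from dataclasses import dataclass

    @dataclass
    class Pose:
        # viewpoint position
        x: float = 0.0
        y: float = 0.0
        z: float = 0.0
        # gaze direction as yaw/pitch in radians (assumed to come from a
        # head tracker in a real system)
        yaw: float = 0.0
        pitch: float = 0.0

    def gaze_vector(pose: Pose) -> tuple:
        """Unit vector the user is currently looking along."""
        cp = math.cos(pose.pitch)
        return (cp * math.sin(pose.yaw),
                math.sin(pose.pitch),
                cp * math.cos(pose.yaw))

    def steer(pose: Pose, speed: float, dt: float) -> Pose:
        """Move the viewpoint along the current gaze direction.

        The direction is re-sampled every frame (continuous specification);
        the speed is a constant chosen once (discrete velocity selection).
        """
        dx, dy, dz = gaze_vector(pose)
        return Pose(pose.x + dx * speed * dt,
                    pose.y + dy * speed * dt,
                    pose.z + dz * speed * dt,
                    pose.yaw, pose.pitch)

    if __name__ == "__main__":
        pose = Pose(yaw=math.radians(30), pitch=math.radians(-10))
        for _ in range(90):  # roughly 1.5 seconds at 60 Hz
            # in a real system, yaw/pitch would be updated from the tracker here
            pose = steer(pose, speed=2.0, dt=1.0 / 60.0)
        print(pose)

Re-sampling the gaze direction every frame is what makes this a continuous
specification of trajectory/direction; fixing the speed once up front is what
makes the velocity selection discrete.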