From poup@mic.atr.co.jp Wed May 27 07:52:11 1998
Date: Wed, 27 May 1998 20:46:04 -0700
From: Ivan Poupyrev <poup@mic.atr.co.jp>
Organization: MIC Labs, ATR International
To: 3D UI List <3d-ui@hitl.washington.edu>
Subject: Re: UI design in a new medium

Interesting discussion; I also have a few points to throw in.

Why is it that when we talk about 2D interaction we almost always talk about WIMP/GUI interfaces? Moreover, we almost always talk about the specific details of WIMP: pull-down menus, icons, the mouse, and so on. Using 2D in 3D is often viewed as the problem of getting pull-down menus into a 3D world. It seems to me there is much more to 2D interaction in 3D.

First of all, there is nothing inherently 2D in the WIMP interface model.
From the point of view of interface semantics, the WIMP "paradigm" simply defines a hierarchy of generic interface elements: their purpose, and the common and specific properties and behaviors that are consistent across the interface and across applications. The 2D visual representations of these elements used in Star/Apple/Windows/X/Motif implement their semantics in a nice and elegant form; however, I believe the semantics are the key in WIMP: you can implement a menu as a pull-down menu, a 3D tree, cubes embedded in a cube (as in Division), a building with windows, and so on, and it will still be a menu with the same operations and similar properties. From this point of view the WIMP interface model can, and probably should, be used in 3D interfaces. The task of using it can be formulated as a) developing consistent visual representations of the basic elements within the 3D environment, b) developing a set of interaction techniques to invoke their behaviors (these in turn can use voice, gesture, direct manipulation, eye direction, touch screens, 2D tablets, brain waves, or whatever), and c) empirical and human-factors evaluation. In fact I have a technical report that looked at different ways of generalizing WIMP for 3D interfaces; if anyone is interested I can send a copy.

Second, what do we really mean when we talk about 2D interaction in 3D environments? If we define 2D interaction as manipulation of 2 DOF instead of 3, then whether some technique is 2D or 3D depends on how we define the DOF. Here is an example. Say we have a tilted desk with objects that we want to move along the desk surface. From a world-centered point of view this is a 3D task and 3D techniques should be used: indeed, we have to manipulate 3 DOF to move an object. However, if we define the DOF relative to the surface of the desk, the task is 2D and 2D techniques should be used. I believe constraints _in the world_ are one way of making 3D interaction 2D.
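To make the tilted-desk example concrete, here is a minimal sketch (my own illustration; the function name and frame conventions are invented, not from the original post). A world-space displacement is constrained to the desk plane by dropping its component along the desk normal, which leaves exactly 2 effective DOF:

```python
import numpy as np

def constrain_to_plane(delta_world, plane_normal):
    """Project a 3D displacement onto the desk plane.

    Removes the DOF along the plane normal, so the remaining motion
    is 2D when expressed in the desk's own coordinate frame.
    """
    n = np.asarray(plane_normal, dtype=float)
    n = n / np.linalg.norm(n)
    d = np.asarray(delta_world, dtype=float)
    return d - np.dot(d, n) * n

# A desk tilted 45 degrees about the x-axis has normal (0, 1, 1)/sqrt(2).
normal = [0.0, 1.0, 1.0]
move = constrain_to_plane([1.0, 2.0, 0.0], normal)
# The constrained move has no component along the desk normal:
assert abs(np.dot(move, normal)) < 1e-9
```

The same pattern applies to any world constraint: the constraint, not the input device, decides how many DOF the task really has.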
Another example is Jeff's image-plane "sticky finger" technique: the user selects objects by touching their projections on an imaginary image plane located in front of the user. From a world-centered point of view this can be seen as a 3D technique, since the user has to manipulate 3 DOF to select an object. However, let's consider a coordinate system with the user at the center and define object positions as the distance and direction to them. Next, let's build one image plane for _all_ objects in the environment at once: a "super" image plane (take an "image plane" integral over the virtual world, if you will :) ). The resulting "super" image plane will not be a plane but a sort of sphere with the user in the middle and the objects projected onto it. From this point of view the task of selecting objects with the image-plane technique becomes essentially 2D, since manipulating the distance to an object is not needed; only the direction to the object matters. It can be argued that all pointing techniques are essentially 2D techniques from the user-centered perspective, and I have some experimental data to support this claim.

Well, these are just a few points from this side of the Pacific ...

Ivan

--
Ivan Poupyrev [poup@isl.hiroshima-u.ac.jp / poup@hitl.washington.edu]
Researcher, MIC Lab, ATR International, Japan [0774-951432]
Ph.D. Candidate, ISL, Hiroshima University, Japan [0824-212959]
Visiting Scientist, HITL, University of Washington, US [206-6161474]
http://www.hitl.washington.edu/people/poup
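The "super" image-plane argument above can be sketched numerically (my own illustration; all names are invented). Each object is projected onto a user-centered unit sphere by normalizing its offset from the user, and selection picks the object whose direction best matches the pointing direction. Note that the object's distance never enters the computation, which is the sense in which the technique is 2D:

```python
import numpy as np

def select_by_direction(pointing_dir, object_positions, user_pos=(0.0, 0.0, 0.0)):
    """Return the index of the object whose direction from the user
    best matches pointing_dir.

    Normalizing each offset projects the object onto the user-centered
    unit sphere: the distance DOF is discarded, leaving only the 2 DOF
    of direction.
    """
    p = np.asarray(pointing_dir, dtype=float)
    p = p / np.linalg.norm(p)
    best, best_cos = None, -2.0
    for i, pos in enumerate(object_positions):
        d = np.asarray(pos, dtype=float) - np.asarray(user_pos, dtype=float)
        d = d / np.linalg.norm(d)  # distance removed here
        c = float(np.dot(d, p))
        if c > best_cos:
            best, best_cos = i, c
    return best

# Objects at different depths along the same ray are equivalent to the selector:
objs = [(0.0, 0.0, -1.0), (0.0, 0.0, -10.0), (1.0, 0.0, -1.0)]
assert select_by_direction((0.0, 0.0, -1.0), objs) == 0
```

Under this framing, any pointing technique reduces to choosing a point on the direction sphere, i.e. a 2-DOF task.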