From: Matt Conway <mconway@microsoft.com>
To: "'bowman@cc.gatech.edu'", 3d-ui@hitl.washington.edu
Subject: RE: VR '99 Tutorial on 3D Interaction
Date: Fri, 4 Dec 1998 10:42:23 -0800

Hi Doug,

Just for accuracy's sake, "The Virginia Contingent" (sounds like a spy thriller) has been scattered to the winds -- I'm in Seattle at MS Research, and Rich Stoakley, first author on the WIM paper, is also here at MS doing user interface work. The rest of Randy Pausch's current students moved to Pittsburgh when Randy went to CMU. There's some UI work being done at Virginia, but none of it was interactive 3D the last time I looked.

Comments below.

> ------------------------------------------------------------
> VR '99 Tutorial Outline - The Art and Science of 3D Interaction
>
> Presenters:
>   Doug Bowman
>   Ernst Kruijff
>   Joe LaViola
>   Ivan Poupyrev
>
> I. Welcome and Introduction [Doug - 30 mins.]
>
>   A. Administrative
>      - introduction of presenters
>      - demographics of attendees
>      - schedule
>      - handouts/materials
>
>   B. Motivation for course: possible application scenario(s) for
>      VEs that are not yet possible because of interaction problems
>
>   C. Definitions of key terms
>      - VE
>      - interaction
>      - interface
>      - device
>      ...

Don't get bogged down in defining VR/VE. Religious war. I'd suggest just stating the technological assumptions and quickly moving on.

>   D. Themes of the course: guidelines and myths
>
>   E. Interactively complex application domains
>      - architecture/CAD
>      - education
>      - manufacturing
>      - medicine
>      - others?

Some others: simulation & training, entertainment, design and prototyping, information & scientific visualization.

>   F. Universal interaction tasks
>      - navigation (travel & wayfinding)
>      - selection & manipulation
>      - system control (commands)

That last one seems like a grab bag. I'd break out selection and manipulation. I'd also be specific about what you mean by manipulation. Traditionally, it could mean translation, rotation, and scaling, but does that also include more exotic operations like bending? breaking? warping? lathing? If these aren't universal, then why is object rotation? What about manipulation that is graphical but non-spatial, like color changes or texture manipulation?

>   G. Goals of interaction design
>      - performance (efficiency, accuracy, productivity, ...)
>      - usability (ease of use, ease of learning, intuitiveness, ...)
>      - usefulness (system meets its goals - interaction technique
>        promotes learning or presence or spatial understanding -
>        "skill transfer" is a good word - interface is relatively
>        transparent so users can focus on tasks)

Obvious, probably, but these are all generically good things in UI design. If I were taking this course, I'd wonder what it was about 3D UI that made these particular design goals harder to achieve. Of course, if you're just going to hit these points quickly as a way of reminding the audience what the goal here is, then that's fine. I'd peek at Jakob Nielsen's Usability Engineering for a concise list of good characteristics of interaction.

> II. Input and Output (hardware) [Joe - 1 hour]
>
>   A. Output devices
>      - visual: HMD, CAVE, Desk, Workbench, Stereo Monitor

I'd ask the critical question: what are the tradeoffs? When does stereo help? What are CAVEs like when you're in the cave but standing very far from the person being tracked?
>      - audio: spatial sound

I'd ask the critical question: when does spatialized sound matter?

>      - tactile and haptic output (really a topic for another tutorial)
>      - Main question: how do these different devices affect the design
>        of interfaces and interaction techniques (esp. visual devices)?

Good. You might also mention motion platforms.

>   B. Input devices
>      - distinction between input device and interaction technique
>      - trackers: magnetic, ultrasonic, hybrid
>      - mice, joysticks, other button devices
>      - tablets
>      - gloves - gestures, pinch gloves
>      - speech

There's a good place to talk about "weird devices" too. To me, VR is an exciting place to work because it forces you to ask the question: how do I interact with a computer if I absolutely cannot use a keyboard and a mouse? (Though there are some mad scientists who have tried doing keyboard input inside an HMD.) Given the limitations, most labs I know have hand-crafted some very creative devices -- balls and wands and hinges with trackers and encoders, often held together with duct tape and a prayer.

For example, at Virginia, we used Tyco (tm) brand race car controllers as a VR input device in place of data gloves. We did this for several reasons:

  - Data gloves cost 50K at the time. We didn't have the dough.
  - Data gloves sucked for a variety of other reasons, not the least of
    which was their short duty cycle. After 200 hours of operation, the
    fibers running down the fingers would cloud up, which would mean very
    little light would reach the optical sensors at the fingertips. The
    glove would therefore report that the user's hand was always clenched.
    Glove users came to call this "virtual arthritis".

Other devices: we used the Mattel Power Glove for a while. (Yes, really.) It had its own problems. Kevin Christiansen, working under Randy Pausch, recently fashioned a thing called the "Data Blow", which is a baseball-cap-mounted weather vane. The user can blow air across this and provide input that way.
Granted, this is a very specialized device, but it was perfect for the application for which it was designed: a VR entertainment simulation where the VR participant "becomes" Godzilla and is allowed to trash a city. Yes, the Data Blow was the input device that gave you the atomic breath action. Very cool. The point is, don't be limited by off-the-shelf technology.

>      - results of evaluation of these devices
>      - Main questions: What interaction techniques are afforded by
>        different devices? Which devices are the most flexible? Are
>        there optimal pairings of input and output devices? Which
>        tasks does each device apply to?

God, this sounds really ambitious. A person could talk about just trackers for an hour! Again, talking about tradeoffs is important.

> ****FIRST COFFEE BREAK****
>
> III. Interaction Techniques (software)
>
>   A. Selection and Manipulation [Ivan - 30 minutes]
>      - within reach vs. at-a-distance

WIM-like interactions and image-plane-based interactions tend to blur the distinction between these two things. I think of it as a false dichotomy. Much more fundamental is selection and manipulation of things that are in sight vs. those that are out of sight.

>      - various types of metaphors
>      - results of evaluation
>
>   B. System Control [Ernst - 30 minutes]
>      - menu systems
>      - pen & tablet interaction
>      - magic lenses
>      - results of evaluation

Again, a huge topic. Voice?

>   C. Navigation
>      - wayfinding [Ernst - 20 minutes]
>        - types of wayfinding
>        - spatial orientation / spatial understanding
>        - maps, cues, and other aids
>        - results of evaluation
>      - travel [Doug - 20 minutes]
>        - types of travel
>        - travel metaphors
>        - results of evaluation
>
>   D. 2D interaction in a 3D world [Joe - 20 minutes]
>      - advantages of 2D interaction
>      - limitations of 2D interaction
>      - seamless integration and transition of 2D and 3D
>
> ****LUNCH****
>
> IV. The Process of Design and Evaluation
>
>   A. The art of interaction design [Ivan - 30 minutes]
>      - design based on intuition
>      - design based on the real world
>      - design based on a back story
>      - naturalism vs. magic in the VE interface

I'd also suggest that there are design opportunities that open up when technological breakthroughs happen.

>   B. The science of interaction design [Doug - 30 minutes]
>      - formal categorization of techniques
>      - taxonomy
>      - guided design - holes in a design space
>      - design based on models of performance
>
> V. Design activity [Attendees - 30 minutes]
>
>   A. Design task - application scenario and requirements
>
>   B. Groups of 5 or so discuss and develop interaction design [15 mins.]
>
>   C. Groups present designs and discuss rationale [15 mins.]
>
> ****SECOND COFFEE BREAK****
>
> VI. Example Interfaces and Applications
>
>   A. Everyone takes 15 minutes each to describe applications they've
>      done or that others have done, and how they fit the guidelines
>      we've talked about - show video if at all possible
>
>   B. Additional examples from attendees or questions for the
>      presenters [15-30 mins.]

I'd STRONGLY suggest moving this earlier or integrating it into the previous talks. Putting all the video examples at the end sounds like it will make the rest of the discussions dry and somewhat hard to follow, because there won't be a concrete example to tie things to.

Hope this helps,

best,
-- Matt
___________________________________________________
Matt Conway
User Interface Research Group
Microsoft Research
http://www.research.microsoft.com/~mconway/