From: owner-3dui@hitl.washington.edu on behalf of Julien Berta [jberta@fakespacesystems.com]
Sent: Thursday, October 04, 2001 11:38 AM
To: 3d-ui@hitl.washington.edu
Subject: RE: text/number input in VEs

Hi all,

First of all, I'm new to the list, though other staff here watch your conversations closely and find them helpful for our development. My apologies if it takes a few posts for me to adjust to the local habits. I'm sure you don't really want commercial posts, and I apologize for slipping this one in, but this thread is so close to what is being developed at Fakespace Systems these days that I thought I'd share some of it.

We are working on a product, due out in the first quarter of 2002 (project name FIE, for Fakespace Interaction Engine), that some of you might have seen at SIGGRAPH, where we gave a technology demo. FIE consists of software and hardware components. The hardware is a connection hub and a PC to which you attach all your input devices. FIE can then drive another computer's input (IRIX, HP-UX, Win32) by acting as a keyboard, a mouse, or a Magellan device.

The most obvious application is to define buttons on VR input devices (wands, gloves, and such) that trigger keyboard events on a target image generator computer (sketch 1 in the P.S. below). Another possibility is to use a tracked device as a virtual pointer that drives the mouse position on a large screen (sketch 2 below). A tracked device can also emulate a Magellan device, providing 3D tracked interaction to apps that don't traditionally support 3D devices. Finally, FIE has a voice recognition engine that lets you trigger button events (custom commands) or drive a 6DOF transformation (fixed grammar). Typically, if you attach the latter to the Magellan emulator, you can "talk to the model" with commands like "translate up" or "rotate left" to control 3D attitude by voice.

Earlier in this conversation it was said that the keyboard is the best tool for the job of entering values. Although I don't agree, the fact is that all commercial apps today do things this way. Our idea was to work around this limitation by enabling high-end interaction devices in most existing apps.

We have also discovered some interesting things about using a tracked device to control the mouse position. If the app provides a virtual trackball interface (examine mode), driving it with a tracked device "feels" 3D, even though it's really 2D input. Projecting 3DOF onto a plane, and then from the plane back onto a sphere to recover a DOF, somehow tricks the brain into thinking all three DOF are still there (sketch 3 below shows that projection). Has anybody else experimented with this approach?

If anyone is interested in more information about our FIE development, you can email me at jberta@fakespacesystems.com or jangelillo@fakespacesystems.com.

Thanks to all,
Julien.

_________________
Julien Berta
Software Engineer
Fakespace Systems
_________________
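
P.S. Since a few of these mappings are easier to show than to describe, here are three rough sketches in C. None of this is FIE code; every function and constant below is just an illustration of the idea, so treat the names as assumptions.

Sketch 1: button-to-keystroke mapping. Assuming a Win32 target, a wand button can be turned into a synthetic key event with the standard keybd_event call (the wandButtonPressed helper and the VK_RETURN choice are placeholders):

    #include <windows.h>

    /* Inject one synthetic key press into whatever app has focus.
       keybd_event is the standard Win32 call, not FIE's internal API. */
    void sendKey(BYTE vk)
    {
        keybd_event(vk, 0, 0, 0);               /* key down */
        keybd_event(vk, 0, KEYEVENTF_KEYUP, 0); /* key up */
    }

    /* e.g. map wand button 0 to the Enter key:
       if (wandButtonPressed(0)) sendKey(VK_RETURN); */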
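
Sketch 2: tracked pointer to mouse position. The natural mapping is to intersect the wand's pointing ray with the display plane and convert the hit point to pixels. A minimal sketch, assuming the screen's lower-left corner and two orthogonal edge vectors are known in tracker coordinates:

    #include <math.h>

    typedef struct { double x, y, z; } Vec3;

    static double dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
    static Vec3 sub(Vec3 a, Vec3 b)
    { Vec3 r; r.x = a.x-b.x; r.y = a.y-b.y; r.z = a.z-b.z; return r; }

    typedef struct {
        Vec3 corner, uEdge, vEdge;  /* lower-left corner, width/height edges */
        Vec3 normal;                /* unit normal facing the user */
        int  pixelsW, pixelsH;
    } Screen;

    /* Intersect the wand ray with the screen plane; returns 1 and fills
       (px, py) when the hit lands on the screen, 0 otherwise. */
    int wandToCursor(Vec3 pos, Vec3 dir, const Screen *s, int *px, int *py)
    {
        double denom = dot(dir, s->normal), t, u, v;
        Vec3 hit, rel;
        if (fabs(denom) < 1e-6) return 0;   /* ray parallel to the screen */
        t = dot(sub(s->corner, pos), s->normal) / denom;
        if (t < 0.0) return 0;              /* pointing away from the screen */
        hit.x = pos.x + t*dir.x; hit.y = pos.y + t*dir.y; hit.z = pos.z + t*dir.z;
        rel = sub(hit, s->corner);
        u = dot(rel, s->uEdge) / dot(s->uEdge, s->uEdge);  /* 0..1 across width */
        v = dot(rel, s->vEdge) / dot(s->vEdge, s->vEdge);  /* 0..1 up the height */
        if (u < 0.0 || u > 1.0 || v < 0.0 || v > 1.0) return 0;
        *px = (int)(u * s->pixelsW);
        *py = (int)((1.0 - v) * s->pixelsH); /* mouse y grows downward */
        return 1;
    }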
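
Sketch 3: why the trackball mapping "feels" 3D. The classic virtual trackball projection lifts the 2D cursor back onto a sphere (falling back to a hyperbolic sheet past the rim, as in the well-known Bell/Shoemake variants), so a 2D drag really does yield a 3D rotation axis:

    #include <math.h>

    /* Lift a cursor position, normalized to [-1, 1] in x and y, onto a
       unit sphere; outside the sphere's silhouette, fall back to a
       hyperbolic sheet so the mapping stays continuous. */
    void cursorToSphere(double x, double y, double out[3])
    {
        double d2 = x*x + y*y;
        out[0] = x;
        out[1] = y;
        out[2] = (d2 < 0.5) ? sqrt(1.0 - d2)   /* on the sphere */
                            : 0.5 / sqrt(d2);  /* past the rim */
    }

    /* A drag from p0 to p1 (both lifted and then normalized) rotates
       about axis p0 x p1 by angle acos(p0 . p1). */

So the impression isn't entirely an illusion: the sphere mapping reconstructs the rotational DOF that the plane projection dropped, which may be why the 2D input still reads as 3D.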