From: Maarten van Dantzich [maartenv@MICROSOFT.com]
Sent: Tuesday, October 05, 1999 3:51 AM
To: 'Jerry Isdale'; 'Niklas Elmqvist'
Cc: '3d-ui@hitl.washington.edu'
Subject: RE: 3D window management

Jerry Isdale said:

> Very few 3d worlds do a decent job with text and when you try to
> use the 2d app as a bit map, readability suffers greatly.

Text is definitely a problem, especially if you expect the reader to read longer texts (web pages) in the environment, or if you're doing 3D info viz and you want very legible labeling that's not too huge. Vector fonts are just grotty. (Technical term. ;)

One question is, how often do you want text to be _not_ screen-aligned when the user is likely to want to read it? Is all we need a mechanism for scaling and for bypassing the fuzzing of trilinear filtering? Are geometry engines going to get powerful enough that you might do case-by-case rasterization of your outline font after warping it under the perspective projection? (Take the TrueType font definition and create a string object ("Foo") consisting of outline curves; warp that object under the accumulated transform; _then_ rasterize. That preserves hinting, etc., maximizing preservation of features. It's heinously expensive if your camera is moving, of course. Talisman, anyone?)

We did a bit of work in the usability lab with text rotated around the vertical (Y) axis, comparing a vector font against a trilinear-filtered texture and an anisotropically filtered texture. Haven't gotten the result accepted for publication yet. (#include "reviewers-clueless.h" ;-) Admittedly it's a minor result, but it's an area that really needs some structured exploration... PhD topic, anyone?

> One approach I'd like to see is using 2d hardware as direct sources for
> bitmaps into the 3d world. Instead of feeding a video converter, the 2d
> frame would feed (be sampled) into texture memory

What exactly did you mean here?
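(An aside on the warp-then-rasterize idea above: here's a rough sketch, with made-up names and a bare projective transform standing in for a real outline engine. Just the skeleton of the idea, not working font code.)

```python
# Hypothetical sketch: warp a glyph's outline control points through the
# accumulated perspective transform *before* rasterizing, instead of
# rasterizing first and texture-mapping the resulting bitmap.

def project_point(m, x, y):
    """Apply a 4x4 row-major matrix to (x, y, 0, 1), with perspective divide."""
    px = m[0][0] * x + m[0][1] * y + m[0][3]
    py = m[1][0] * x + m[1][1] * y + m[1][3]
    w  = m[3][0] * x + m[3][1] * y + m[3][3]
    return (px / w, py / w)

def warp_outline(m, points):
    """Warp every outline control point; a real implementation would then
    hand the (still hinted) warped outline to the scan converter."""
    return [project_point(m, x, y) for (x, y) in points]

# Identity plus a little perspective foreshortening along x.
M = [[1, 0, 0, 0],
     [0, 1, 0, 0],
     [0, 0, 1, 0],
     [0.001, 0, 0, 1]]

print(warp_outline(M, [(0, 0), (100, 50)]))
```

The point is only that rasterization happens after the perspective warp, so the rasterizer can still snap features to the pixel grid.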
That you want to preserve 2D drawing acceleration (GDI acceleration, under Windows) and still texture-map the result into a 3D scene? That's not so far-fetched. We've certainly discussed it here at MS as a desirable extension of the App Redirection architecture that we have implemented right now. It might not even be THAT hard, since unification of GDI DCs and DX surfaces is happening in Win2000. You'd do it on a single video card: 2D apps use the GDI paths into a DC bitmap allocated in VRAM, which you then also access as a texture surface. The main issue is locking/synchronization, or coming up with a clever double-buffering scheme that doesn't break the "update region" optimization in the GDI painting model. Of course the changes do have to happen at the low system level, and it's hard to get cycles from the Win2000 and DirectX teams for such far-out things... but not unimaginable.

But the lack of 2D drawing acceleration is not the bottleneck right now. Even on a single-processor P-II 400, we can run a bunch of live apps, redirect their drawing into memory bitmaps, blt the result into a texture, and achieve 20 fps or so (1024x768x16bpp on an NVidia TNT2, DX6, low polygon count). Actually, that's a 20 fps update of the 3D scene with only partial update of some of the app windows on each frame; I haven't run any stress tests to see how much 2D drawing we can handle--but regular interactive app usage, and even animation on web pages, works fine. It might be worth it for the HITLab folks to stop by for a demo some time.

Is anyone using the SGI Visual Workstation machines to get highly dynamic textures?

> Let me put three wicked fast 3d accelerator boards in my PC and put it up
> on three large (flat) screens.

Yow. Are you ordering quad-processor machines, too?
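(To make the double-buffering concern concrete, here's a toy sketch of the idea -- obviously not the real redirection code, and all names are made up: GDI keeps painting only its update region into a "DC" buffer, and the 3D side blts just the accumulated dirty rectangle into its texture copy under a lock.)

```python
# Hypothetical sketch of a double-buffered redirected window that
# preserves the GDI "update region" optimization: only the dirty
# rectangle is copied to the texture, once per 3D frame.

import threading

class RedirectedWindow:
    def __init__(self, w, h):
        self.w, self.h = w, h
        self.dc = [[0] * w for _ in range(h)]       # buffer GDI paints into
        self.texture = [[0] * w for _ in range(h)]  # buffer the 3D scene samples
        self.dirty = None                           # (x0, y0, x1, y1) or None
        self.lock = threading.Lock()

    def paint(self, x0, y0, x1, y1, value):
        """GDI side: draw into the update region and grow the dirty rect."""
        with self.lock:
            for y in range(y0, y1):
                for x in range(x0, x1):
                    self.dc[y][x] = value
            if self.dirty is None:
                self.dirty = (x0, y0, x1, y1)
            else:
                a, b, c, d = self.dirty
                self.dirty = (min(a, x0), min(b, y0), max(c, x1), max(d, y1))

    def sync_texture(self):
        """3D side, once per frame: blt only the dirty rect into the texture."""
        with self.lock:
            if self.dirty is None:
                return
            x0, y0, x1, y1 = self.dirty
            for y in range(y0, y1):
                self.texture[y][x0:x1] = self.dc[y][x0:x1]
            self.dirty = None
```

The lock here stands in for whatever surface-locking the driver level would need; the dirty-rect union is the part that keeps the GDI painting model's partial-update behavior intact.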
:)

One of the challenges that lies ahead for us is that our scene traversal/rendering engine has always assumed it can eat up all available processor cycles, re-rendering the whole frame without regard to what actually changed. That kind of CPU saturation is really not acceptable in shipping software; we're going to have to come up with lazier update models. There's cool research in that direction (and even an abandoned productization effort--see Talisman), but very little of it is emerging in products, because it's not something the game people really want.

> Add in a few cameras to track my head and hands for gestures

We toyed with camera-based head-tracking (George Robertson did, actually, in collaboration with the Vision group at MSR), and my personal feeling was that it was a neat idea and a great demo, but practically unusable because of the lagginess. (Because the vision algorithms are fairly noisy, you end up smoothing over a few frames, and thus get a laggy camera.) I believe there are some folks in Germany who've done this as well--at Siemens, maybe? I haven't seen their system.

Maarten.
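P.S. The smoothing/lag trade-off is easy to see in a toy filter (hypothetical numbers, just illustrating the point, nothing to do with the actual vision code):

```python
# Toy illustration: an exponential moving average damps tracker noise,
# but the reported head position then trails the true one by frames.

def smooth(samples, alpha=0.3):
    """Exponential moving average; smaller alpha = smoother but laggier."""
    out, s = [], samples[0]
    for x in samples:
        s = alpha * x + (1 - alpha) * s
        out.append(s)
    return out

# A head that steps from 0 to 1: the smoothed estimate creeps toward 1
# over several frames, which the user perceives as a laggy camera.
step = [0.0] * 3 + [1.0] * 7
print([round(v, 2) for v in smooth(step)])
```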