The Immersive Environments Lab (IEL) offers a high-end, large-format multimodal display for visualization and research built from commodity-level technology. Using a kit-of-parts approach, the IEL can continually upgrade its capabilities to follow trends in visualization and interaction technology. The lab strives to incorporate newer technologies to explore better ways of providing features such as 3D visualization in the context of the built environment and spatial sciences. It also serves as a scheduled presentation and testing space for both classes and research.
The IEL uses three 8'×6' Stewart Filmscreen Techscreen 150 screens with three front-surface mirrors to display 2D and 3D content. Projection is currently handled by three BenQ MW632ST projectors capable of running 3D content at 1280×800 (a 16:10 aspect ratio).
A dedicated research machine resides within the IEL: a Dell Precision 7810 with an Intel Xeon processor (3.0 GHz), 8 GB of memory, and an NVIDIA Quadro M5000 8 GB video card with a stereo pin (DIN connector). It currently runs the SALA Windows 7 base image. The IEL runs active stereo using OpenGL quad-buffering through Volfoni's RF emitter and 10 pairs of active shutter glasses.
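In quad-buffered active stereo, each frame is rendered twice, once per eye, into the GL_BACK_LEFT and GL_BACK_RIGHT buffers while the RF emitter keeps the shutter glasses in sync. The two views differ only by a small horizontal eye offset, which skews each viewing frustum asymmetrically. A minimal sketch of that frustum math follows; the eye separation, screen size, and viewing distance used here are illustrative assumptions, not the lab's calibrated values:

```python
# Asymmetric ("off-axis") frustum for one eye of a quad-buffered stereo
# pair. The screen lies `screen_dist` in front of the midpoint between
# the eyes; `eye_offset` is -sep/2 for the left eye and +sep/2 for the
# right. The return value matches the (left, right, bottom, top)
# arguments of glFrustum, evaluated at the near plane.
def stereo_frustum(eye_offset, screen_w, screen_h, screen_dist, near):
    scale = near / screen_dist          # project screen edges onto near plane
    left = (-screen_w / 2 - eye_offset) * scale
    right = (screen_w / 2 - eye_offset) * scale
    bottom = (-screen_h / 2) * scale
    top = (screen_h / 2) * scale
    return left, right, bottom, top

# Illustrative numbers: 65 mm eye separation, an 8'x6' screen
# (~2.4 m x 1.8 m) viewed from 3 m, near plane at 0.1 m.
SEP = 0.065
left_eye = stereo_frustum(-SEP / 2, 2.4, 1.8, 3.0, 0.1)
right_eye = stereo_frustum(+SEP / 2, 2.4, 1.8, 3.0, 0.1)
```

In a render loop, the left-eye frustum would be applied after glDrawBuffer(GL_BACK_LEFT) and the right-eye frustum after glDrawBuffer(GL_BACK_RIGHT), with the camera position also shifted by ∓SEP/2 in the view transform.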
The IEL also includes an Interactive Tabletop prototype: a semi-immersive environment based on a stereoscopic multi-touch surface combined with a Microsoft Kinect depth camera. The camera tracks the user's head, enabling a real-time, personalized 3D perspective view of the content shown on the table. A 3D television combined with active shutter glasses enables stereoscopic visualization, sending 1920×1080 images to each eye 60 times per second. This setup renders high-definition virtual objects as if they were lying above the table surface. The touch-enabled surface, built on a multi-touch frame capable of detecting up to 10 simultaneous touches, lets users interact with virtual models through gestures.
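Head tracking is what makes the perspective view personalized: each frame, the tracked head position re-derives an off-axis projection so virtual objects stay registered to the physical table as the viewer moves. A minimal sketch of that per-frame frustum update is shown below; the screen dimensions and head coordinates are illustrative assumptions, and a real implementation would read the head position from the Kinect's skeleton stream instead:

```python
# Off-axis frustum for a head-tracked display. `head` is the tracked
# head position (x, y, z) in metres relative to the screen centre,
# with +z pointing from the screen toward the viewer. Returns
# glFrustum-style (left, right, bottom, top) at the near plane.
def head_tracked_frustum(head, screen_w, screen_h, near):
    hx, hy, hz = head
    scale = near / hz                   # project screen edges onto near plane
    left = (-screen_w / 2 - hx) * scale
    right = (screen_w / 2 - hx) * scale
    bottom = (-screen_h / 2 - hy) * scale
    top = (screen_h / 2 - hy) * scale
    return left, right, bottom, top

# Viewer centred 0.8 m above an illustrative 0.88 m x 0.50 m surface,
# near plane at 0.05 m...
centred = head_tracked_frustum((0.0, 0.0, 0.8), 0.88, 0.50, 0.05)
# ...then leaning 0.2 m to the right: the frustum skews the other way
# so the image stays locked to the physical screen.
leaning = head_tracked_frustum((0.2, 0.0, 0.8), 0.88, 0.50, 0.05)
```

Combining this per-eye (offsetting `head` by ∓half the eye separation) yields the head-coupled stereo pair that makes objects appear to float above the table.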