DCS861D: The User Interface from Front to Back
Immersion
Virtual Reality
Definition - VR is a computer-generated, interactive, three-dimensional environment in which a person is immersed.
- This implies:
- A high-performance computer graphics system is required to provide an adequate level of realism.
- The virtual world is interactive. A user requires real-time response from the system to be able to interact with it effectively.
- Finally, the user is immersed in the virtual environment. This usually means that the user wears a head-mounted display.
Virtual Reality Interface Devices
HMD - head-mounted display
- Best-known approach to VR
- Coupled with head tracking
- Stereo binocular view of the virtual world, often with stereo audio
- By tracking the viewing position (the head) and its orientation in the physical world, the view and perspective of the virtual world stay consistent with what one would experience in the physical world from the same actions.
- Permits some means of input, such as a dataglove or some other high-degree-of-freedom input device, to support interaction with the displayed virtual world.
- Stereo display, much like a pair of glasses that provides a view into the virtual world.
- The physical form of these "glasses" can range from something on the scale of a motorcycle helmet to a pair of sunglasses.
- Great variety in display quality.
- The goal is to provide the widest field of view at the highest quality, with the least weight, at a reasonable cost.
- Issues:
- HMDs cover eyes
- Virtual world seen at the expense
of the physical.
- Users cannot directly see:
- their hands or the devices that
they are controlling
- objects or other people who are
in their immediate physical environment
- Solution:
- Some representation of physical
world entities must appear in the virtual
- Mount one or more video cameras
onto the HMD and feed the signals to the displays
Surround Environments
Cave based VR
- User functions within a room on which one or more of
the surfaces (walls, floor, ceiling …) is the display
- Some or all of the walls of a room are
rear-projection stereo displays
- User wears glasses to enable viewing of stereo images
- Since glasses are transparent, one can see the
physical as well as the virtual world
- Computer generated objects appear to enter into the
physical space of the Cave itself, where the user can interact with them
directly
- User’s head position is tracked within the Cave so
that what is displayed preserves proper perspective, etc., in adapting to
movements and change of location of gaze
- Issues:
- Two people in the Cave - both are viewing the same displays, preventing each from having their own point of view.
- Both viewers look at different things in different directions, but do so as if from the perspective of the current location of the head tracker.
- Some mechanism is needed for interacting with what is seen.
1992/93 - EVL's Cave (http://www.evl.uic.edu/home.html)
Cruz-Neira, Sandin, DeFanti, Kenyon and Hart
Electronic Visualization Laboratory - University of Illinois at Chicago
- Generic Immersive VR Environment
- A theater 10x10x9 feet, made up of three
rear-projected screens for walls and a reflective projection for the
floor.
- High-resolution, high-bandwidth, short-persistence CRT projectors throw full-color workstation fields (1024x768 stereo) onto the screens, giving approximately 3,000 linear pixels of resolution across the surrounding composite image.
- Computer-controlled audio provides a sonification
capability to multiple speakers.
- The user's head and hand orientation and position are acquired using an Ascension tracking system with tethered electromagnetic sensors.
- Stereographics' LCD stereo shutter glasses are used
to separate the alternate fields going to the eyes.
- SGI InfiniteReality Engine is used to create the
imagery that is projected onto the walls and floor.
VEMovie.mov
Trimension
Cabin - Fully enclosed cave
Image Construction : 5 - 6 Cameras
Domes and Walls
SkyVision
Full-Dome (SkySkan.com)
- Sloping seats, zenith position.
- Different projector orientations.
Imax
- Primary goal is to fill peripheral vision.
- 16:9 aspect ratio format
Personal Domes
Elumens Visionstation (VS) - Hemispherical
Display Elumens Visionstation (VS3)
Elumens Visionstation (VS3)
Depth: 8'-10" (106")
Height: 7'-9" (93")
Width: 11'-4" (136")
Partial Immersion
Panoramic -
Curved Walls
- Typically use RGB projectors (varying focal depth)
- Edge blending (On computer or with video hardware)
- Small "sweet" spot.
Trimension (others - Barco)
MultiWalls
- How to synchronise multiple devices (Computer, DVD)
- Multiple pipe graphics cards (Cost)
- Seams between walls (Aligned, edge blending)
- Front vs back projection (Space)
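Edge blending attenuates each projector's output across the shared overlap region so that the doubled brightness where two images overlap sums back to an even level. A minimal sketch in Python, assuming a simple linear ramp (production systems typically apply a gamma-corrected ramp and per-channel correction):

```python
def edge_blend_weight(x, width, overlap):
    """Brightness weight for pixel column x on one projector, where the
    leftmost and rightmost `overlap` columns are shared with a neighbour.
    The ramp is linear, so two overlapping projectors sum to full brightness."""
    if x < overlap:                  # left overlap region: ramp up 0 -> 1
        return x / overlap
    if x > width - overlap:          # right overlap region: ramp down 1 -> 0
        return (width - x) / overlap
    return 1.0                       # interior: full brightness
```

A column halfway into the overlap of one projector gets weight 0.5, and the matching column on the neighbouring projector gets the complementary 0.5, so their sum is seamless.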
Three Wall - CAEV - Melbourne University
Two Wall - PortaWedge - Melbourne University
Tables, Desks, Single Screens
Immersadesk 2 - EVL
Fakespace
Trimension
The Grip Project - UNC Chapel Hill - Molecular Visualization
The NanoManipulator - UNC
Chapel Hill
http://www.cs.unc.edu/Research/nano/
movie
Responsive Workbench - 1993
- 3D interactive workspace originally developed by
Wolfgang Krueger at GMD
- Computer-generated stereoscopic images are projected
onto a horizontal tabletop display surface via a projector-and-mirrors
system, and viewed through shutter glasses to generate the 3D effect
- A 6DOF tracking system tracks the user's head, so
that the user sees the virtual environment from the correct point of view.
- A pair of gloves and a stylus, also tracked by the
system, can be used to interact with objects in the tabletop environment.
http://www.multires.caltech.edu/~rwbdemo/
Low Cost Stereo Table - Pace University (CAM)
Cubby - Delft
Stereoscopic Displays
Mount Fuji Stereo Pairs
- Free viewing
- Cross-eyed viewing
(stereo pair images labeled left eye / right eye)
5DT HMD 800
Display Resolution: 800x600x3 (RGB) pixels - full SVGA
Optics Field of View: 28 (H) x 21 (V) degrees
Headphones: Sennheiser HD 25 closed dynamic headphones
Frequency Response: 16Hz - 22kHz (-3dB)
- Shutter Glasses (Active Stereo)
Crystal Eyes (Stereographics)
- Polarized Glasses (Passive Stereo)
Autostereoscopic Display
Dresden 3D (also
Stereographics and others) - 15" - 18"
MultiMo3D , Heinrich-Hertz-Institut für Nachrichtentechnik, Germany
Principles and prototype of a 50-inch projection-type dual lenticular screen
3D display.
Volumetric Display -
Actuality Systems
Image Size and Display Type
- Approx. 10" diameter spherical image
- Swept-screen multiplanar volumetric display
- Autostereoscopic: no viewing goggles
- Volume-filling imagery
- Supports many simultaneous viewers - no head-tracking
Generic VR System
User Inputs
Tracker - Head Position and Orientation
- Determines the viewpoint of the virtual world
- Tracks movement up, down, and side to side, and rotation (yaw, pitch and roll)
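The tracked yaw, pitch and roll can be turned into a rotation that reorients the virtual camera. A minimal sketch, assuming a Z-Y-X Euler convention and a -z default forward vector (conventions vary by tracker and toolkit):

```python
import math

def rotation_from_ypr(yaw, pitch, roll):
    """Build a 3x3 rotation matrix (row-major, Z-Y-X order: R = Rz @ Ry @ Rx)
    from tracker yaw, pitch and roll angles given in radians."""
    cy, sy = math.cos(yaw), math.sin(yaw)
    cp, sp = math.cos(pitch), math.sin(pitch)
    cr, sr = math.cos(roll), math.sin(roll)
    return [
        [cy * cp, cy * sp * sr - sy * cr, cy * sp * cr + sy * sr],
        [sy * cp, sy * sp * sr + cy * cr, sy * sp * cr - cy * sr],
        [-sp,     cp * sr,                cp * cr],
    ]

def view_direction(yaw, pitch, roll, forward=(0.0, 0.0, -1.0)):
    """Rotate the default forward vector by the tracked head orientation."""
    R = rotation_from_ypr(yaw, pitch, roll)
    return tuple(sum(R[i][j] * forward[j] for j in range(3)) for i in range(3))
```

With all three angles zero the viewer still looks down -z; a 90-degree pitch swings the view direction onto the x axis.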
Datagloves - Hand Position and Orientation
- Monitor the status of the user's fingers
- Measure each finger's flexure and the orientation (pitch and roll) of the user's hand
Haptics - Force Feedback
Rendering (Computer Graphics)
- Object Model
- Lighting Model
- Camera Model
Wireframe models
Wireframe models with hidden lines
Ambient Illumination
Faceted Shading
Gouraud Shading
Phong Shading
Phong Shading - Polygon Meshes
Phong Shading - Bicubic Patches
Advanced Illumination
Texture Mapping
Bump Mapping
Reflection Mapping
Camera Model
Focal Lengths and Angles of View
35mm Camera | Focal Length (mm) | Angle of View (degrees)
Extreme Telephoto | 800 | 3.5
 | 400 | 6.0
 | 200 | 12.5
Moderate Telephoto | 135 | 18.0
 | 85 | 29.0
 | 50 | 46.0
Normal | 43 | 53.0
Moderate Wide Angle | 24 | 84.0
Wide Angle | 18 | 94.0
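The table values follow approximately from the pinhole relation between focal length and the ~43.3mm diagonal of the 24mm x 36mm frame; published figures for very short focal lengths deviate somewhat because real lens designs are not ideal pinholes. A sketch:

```python
import math

FRAME_DIAGONAL_MM = 43.27  # diagonal of the 24mm x 36mm full frame

def angle_of_view(focal_length_mm):
    """Diagonal angle of view in degrees for an ideal (pinhole) 35mm lens."""
    return math.degrees(2 * math.atan(FRAME_DIAGONAL_MM / (2 * focal_length_mm)))
```

For example, `angle_of_view(24)` gives roughly 84 degrees and `angle_of_view(50)` roughly 47 degrees, matching the moderate wide-angle and normal rows to within a degree.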
Unit Cube at 5 units from image plane
Camera held fixed with different angles of view (Extreme Wide Angle, Wide Angle, Normal, Telephoto)
Camera position adjusted, maintaining constant image size (Extreme Wide Angle, Wide Angle, Normal, Telephoto)
Hidden Surface
Object Space
Painter's Algorithm - Depth Sort
Image Space
z-buffer (depth buffer)
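The z-buffer resolves hidden surfaces in image space: each incoming fragment is kept only if it is closer to the camera than the depth already stored for its pixel. A minimal sketch (the fragment tuples and list-of-lists buffers are illustrative simplifications):

```python
def zbuffer_composite(fragments, width, height, far=float("inf")):
    """Resolve per-pixel visibility: keep the fragment nearest the camera.
    `fragments` is an iterable of (x, y, depth, color) tuples."""
    depth = [[far] * width for _ in range(height)]   # depth buffer, init to far
    color = [[None] * width for _ in range(height)]  # frame buffer
    for x, y, z, c in fragments:
        if z < depth[y][x]:          # closer than what is already stored?
            depth[y][x] = z
            color[y][x] = c
    return color
```

Unlike the painter's algorithm, no global sort of the objects is needed; visibility is decided independently at every pixel.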
Illumination Model
Surfaces in real-world environments receive light in three ways:
- Directly from existing light sources such as the sun or a lit candle
- Light that passes through and refracts in transparent objects such as water or a glass vase
- Light reflected, bounced, or diffused from other existing surfaces in the environment
Local (Simple)
Material Models
- ambient light
- diffuse light
- specular light
- Phong model
Simple Shading Model
- Objects under the influence of light
- Deficiencies
- point light source
- no interaction between objects
- ad hoc, not based on model of
light propagation
- Benefits
- fast
- acceptable results
- hardware support
Ambient Light
Diffuse Reflection
Light from the light source is sent in every direction.
Object appearance is independent of viewer position.
It depends only on the relative position of the light source.
Diffuse + Ambient
Specular
Perfect Reflector (Mirror)
Imperfect Reflector - Phong Model
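The Phong model combines the three terms above: ambient, Lambertian diffuse, and a specular lobe raised to a shininess exponent around the mirror-reflection direction. A minimal scalar sketch (the coefficients ka, kd, ks and the exponent are illustrative choices, not fixed values):

```python
import math

def normalize(v):
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def phong_intensity(normal, to_light, to_viewer,
                    ka=0.1, kd=0.6, ks=0.3, shininess=32, light=1.0):
    """Scalar Phong intensity: ambient + Lambert diffuse + specular lobe."""
    n, l, v = normalize(normal), normalize(to_light), normalize(to_viewer)
    diffuse = max(dot(n, l), 0.0)
    # mirror reflection of the light direction about the surface normal
    r = tuple(2 * dot(n, l) * nc - lc for nc, lc in zip(n, l))
    specular = max(dot(r, v), 0.0) ** shininess if diffuse > 0 else 0.0
    return ka + light * (kd * diffuse + ks * specular)
```

With the light behind the surface only the ambient term remains, which is what makes this local model fast but unable to account for light bounced from other objects.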
Global
Ray Tracing
www.povray.org
Radiosity
Radiosity Method
- From field of thermal engineering to account for
radiative heat transfer
- Foundation - conservation of radiative energy in a
closed environment
- First applied to computer graphics in 1984 at Cornell
and Hiroshima University
- Calculates lighting effects of ideal diffuse
reflections
- Other rendering techniques use a directionless "ambient lighting" term
Radiosity methods
- are three-dimensional object-space algorithms that solve for intensities at discrete points or areas on modeled surfaces, not for pixels on a 2D image plane.
- create solutions independent of camera location or
orientation.
- make all surfaces capable of reflecting or emitting
light energy
- Two-pass radiosity methods compute specular reflections and refractive transparencies as a second pass, using ray-traced specular reflections and transparencies
One-Pass
Two-Pass
Radiosity Procedure
1. The modeled world is broken into a finite number N of discrete patches
2. The radiosity equation is used to relate the patches
3. The N simultaneous equations are solved iteratively using the Gauss-Seidel method
4. The radiosity equation makes use of the following:
Energy that leaves Surface_A and strikes Surface_B is attenuated by 2 factors:
- The physical relationship between Surface_A and Surface_B (known as the form factor).
- The reflectivity of Surface_A (some light will be absorbed and not reflected to Surface_B).
5. Form factors are dimensionless quantities that describe the radiative exchange between 2 surfaces based on their geometric relationship within the virtual environment
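Step 3 can be sketched as a Gauss-Seidel iteration on the radiosity equation B_i = E_i + rho_i * sum_j F_ij * B_j, where F is the form-factor matrix from steps 4-5. Dense lists and a fixed iteration count are simplifications here; a real solver would test for convergence:

```python
def solve_radiosity(emission, reflectance, form_factors, iterations=50):
    """Gauss-Seidel iteration for B_i = E_i + rho_i * sum_j F_ij * B_j.
    `emission` holds E_i, `reflectance` holds rho_i, and `form_factors`
    is the N x N matrix of F_ij values between patches."""
    n = len(emission)
    B = list(emission)                  # start from the emitted energy alone
    for _ in range(iterations):
        for i in range(n):
            gathered = sum(form_factors[i][j] * B[j] for j in range(n) if j != i)
            B[i] = emission[i] + reflectance[i] * gathered
    return B
```

For two patches with one emitter, the non-emitting patch ends up with a small radiosity purely from reflected energy, illustrating the diffuse interreflection that local models miss.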
Software
VR Toolkits
Cavernsoft
Toolkit for High-Performance Tele-Immersive Collaboration
- Audio streaming.
- Basic avatar classes without graphics.
- Navigation and collision detection.
- Menus.
- Pick and Move
- Collaborative widget interface.
- Collaborative framework for animating data
sets.
- LIMBO - basic collaborative framework for building other collaborative applications.
- Manipulative coordinate system class for
programming transformations.
VRJuggler
Open Source Virtual
Reality
- Scalable from simple desktop systems like PCs to complex multi-screen systems running on high-end workstations and supercomputers.
- Development environment supports many VR
configurations including desktop VR, HMD, CAVE(TM)-like devices, and
Powerwall(TM)-like devices
Graphics APIs
- Java3D
(http://java.sun.com/products/java-media/3D/)
- OpenGL (www.opengl.org)
Hardware
Graphics Chip
Nvidia GeForce4 Ti 4400
Triangles per Second: 125 Million
Fill Rate: 4.4 Billion AA Samples/Sec.
Operations per Second: 1.12 Trillion
Memory Bandwidth: 8.8 GB/Sec.
Maximum Memory: 128MB
Graphics Card
3DLabs Wildcat III 6210
Triangles per Second: 33.0 Million 3D Gouraud-shaded triangles, Z-buffered
Fill Rate: 400.0 M pixels/sec (trilinear fill)
3D Vectors (solid-color, 10-pixel): 26.1 M vec/sec
Total Memory: 128MB frame buffer + 32MB DirectBurst memory, 256MB 3D texture memory (416MB total)
Other: Stereo sync, multiview and genlock support, 3D volumetric texture support
SGI InfiniteReality3 Graphics
Fill Rate: 896 M pixels/sec
Graphics Memory: 256MB 3D texture memory, z-buffer ?
Other: Stereo sync, genlock support, up to 16 graphics pipes