CAMeRA


Virtual Room

 

The Virtual Room is currently in the final stages of testing and deployment. Work on the Virtual Room began in 2010, and it is expected to be fully live and usable by summer 2011.

The Virtual Room Tech Lab will provide researchers with a high-quality virtual reality environment that is easily accessible to all, regardless of design experience. As research is conducted, CAMeRA, researchers, and affiliate organizations will continue to expand the Virtual Room’s library of shared 3D objects.

 

Using the Logos3D-powered engine, researchers will have access to CAMeRA’s existing repository of 3D objects, animations, interactions, and terrains. The system has been designed so that anyone can build a solid VR environment: researchers will be able to use the drag-and-drop interface to create any type of layout, complete with avatar interactions and responses. Additionally, the Virtual Room environment will be usable across a continuum of displays, providing the flexibility to meet a wide range of research needs in terms of realism and immersion. CAMeRA's Virtual Room Tech Lab focuses on research into, and applications of, a highly customizable rendered world as a means to push forward interdisciplinary knowledge.
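As a rough illustration of this workflow, the sketch below shows how a scene might be assembled from shared library assets and then rendered to a chosen display. The class and method names are hypothetical stand-ins introduced only for illustration; the Virtual Room itself is operated primarily through the drag-and-drop interface rather than code.

    # Illustrative sketch only: Library, VirtualRoom, and their methods are
    # hypothetical stand-ins, not the actual Logos3D/CAMeRA API.

    class Library:
        """Stand-in for the shared repository of 3D objects, animations,
        interactions, and terrains."""

        def load(self, asset_name):
            print(f"loading '{asset_name}' from the shared repository")
            return asset_name


    class VirtualRoom:
        """Stand-in for a scene assembled in the Virtual Room."""

        def __init__(self, terrain):
            self.terrain = terrain
            self.objects = []

        def place(self, asset, position):
            # Equivalent to dragging an asset onto the layout at a given position.
            self.objects.append((asset, position))

        def render(self, display="3d_screen"):
            # The same scene can target 2D/3D screens, projectors, or an HMD.
            print(f"rendering {len(self.objects)} objects on '{display}'")


    library = Library()
    room = VirtualRoom(terrain=library.load("city_street"))
    room.place(library.load("office_building"), position=(10, 0, 5))
    room.place(library.load("walking_pedestrians"), position=(2, 0, 8))
    room.render(display="head_mounted_display")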

 

CAMeRA helps provide you with the hardware, software, space, and expertise to meet your research needs, including:

 

Rendering
> 3D objects to be placed within the environment
> Animations for the surroundings, such as people walking or a burning fire
> Terrains and textures to make buildings, surfaces, and your world come to life
> The drag-and-drop interface makes rendering accessible to all researchers; your creativity is the only limit to what can be created
Interactions
> Control how people and objects interact with participants; for example, when a participant's avatar crosses a certain radius around a character or object, that character or object reacts
> Use the pre-existing library, or program your own interactions with the easy-to-learn, user-friendly scripting language
> For example, you can set a police officer in the environment to begin interacting with participants as they approach, and configure the officer to respond and react differently based on the race of the participant and/or other characters in the environment (a rough sketch of such a proximity trigger follows this list)
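To make the trigger mechanism above concrete, here is a minimal sketch (in Python) of a proximity-triggered response, assuming a simple update function called each frame. The function, state, and radius names are illustrative assumptions and do not represent the Virtual Room's actual scripting language.

    import math

    # Hypothetical sketch of a proximity-triggered interaction; the names and
    # structure are illustrative, not the Virtual Room's scripting language.

    TRIGGER_RADIUS = 3.0  # metres around the officer avatar


    def officer_update(officer_pos, participant_pos, already_engaged):
        """Begin an interaction once the participant crosses the trigger radius."""
        if not already_engaged and math.dist(officer_pos, participant_pos) < TRIGGER_RADIUS:
            return "greet_participant"  # e.g. play a greeting animation and dialogue
        return "idle"


    # Example: the participant walks toward the officer avatar.
    print(officer_update((0, 0, 0), (5.0, 0, 0), already_engaged=False))  # idle
    print(officer_update((0, 0, 0), (2.0, 0, 0), already_engaged=False))  # greet_participant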

Display Mechanisms

Customize the level of immersion your participants experience, and gain flexibility in operationalization and deployment, by choosing from multiple options:

> 2D screens
> 3D screens
> 2D and 3D projectors/beamers
> Head-mounted, motion-detecting display (for a fully-immersive experience)

Hardware & Additional Tools

Depending on your research aims, CAMeRA can work with you to develop additional items to meet your needs. Previous research has developed:

> Interface tools for interacting with the environment (e.g., a stationary bicycle that fed into the virtual environment and responded to turns and to changing slope and terrain; a rough sketch of such a device bridge follows this list)
> Response-measurement tools (e.g., feedback devices that measured changing biometrics in response to the virtual environment)
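As a rough sketch of how such an interface tool could drive the virtual environment, the example below maps hypothetical bicycle sensor readings (pedal speed, steering) onto avatar movement, slowing the rider on an uphill slope. The device fields and function are assumptions for illustration, not the actual hardware integration used in previous projects.

    import math
    from dataclasses import dataclass

    # Hypothetical sketch of how a physical interface device (such as the
    # stationary bicycle described above) might drive the avatar inside the
    # virtual environment; all names here are illustrative assumptions.


    @dataclass
    class BicycleSample:
        pedal_speed: float    # metres per second, from a wheel sensor
        steering_rate: float  # degrees per second of turn, from a handlebar sensor


    def advance_avatar(x, y, heading, sample, slope, dt=0.1):
        """Move the rider's avatar one time step forward from a sensor reading.

        Steering changes the heading; an uphill slope reduces the effective speed.
        """
        heading += math.radians(sample.steering_rate) * dt
        speed = max(sample.pedal_speed - 0.5 * slope, 0.0)
        x += speed * math.cos(heading) * dt
        y += speed * math.sin(heading) * dt
        return x, y, heading


    # Example: one 0.1 s step while pedalling up a gentle slope and turning right.
    print(advance_avatar(0.0, 0.0, 0.0, BicycleSample(4.0, 15.0), slope=1.0))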
   

This list is by no means exhaustive or static. CAMeRA is continuously expanding its resource repository as its list of partner researchers and projects continues to grow.

 

If we do not currently have the tools you are looking for, CAMeRA will gladly discuss how we can acquire them cooperatively. We are a community of researchers and want to build the largest possible bank of resources for all types of new media research. Please contact us so that we can begin to consider your specific needs and what a partnership with CAMeRA would look like.

 

 

 

Some highlighted examples of projects that have taken place in the Virtual Room during its preliminary phases:

Future U
 

The Future U project will take images of participants and age them into their 60s and 70s, thus rendering a future “you” for each. Participants will then be fully immersed into a virtual environment using a 3D headset. As the participants move through the room, they will come upon a mirror portraying the aged image. They can interact with the room and mirror, and ultimately must confront their future selves.

 

CAMeRA partnered with researchers and external affiliates to develop and share the virtual environment, aging technology, head-mounted display, and space in which to conduct the study.

Future U Projection

     
Metaverse1
CAMeRA participates in the EU-funded project Metaverse1, which aims to provide a standardized global framework enabling interoperability between virtual worlds (for example Second Life, World of Warcraft, IMVU, Google Earth, and many others) and the real world (sensors, actuators, vision and rendering, social and welfare systems, banking, insurance, travel, real estate, and many others). A particular point of attention will be the ‘Metaverse for all’, aimed at the eInclusion of minorities in society.

An overview of the Metaverse1 bicycle interface

 

A more in-depth view into the mechanics of the Metaverse1 bicycle project