
Jan 22, 2007

Location-tracker in Second Life

Via 3DPoint

 

SLStats comes in the form of a wristwatch, available in Hill Valley Square [in SL] in the Huin sim. Once you register with the service in-world, the watch "watches" where you go, tracking your location as you move around the world, as well as which other avatars you come into contact with. The information is used on the SLStats site to rank the most popular regions (among SLStats users, of course) and to track how much time you've spent in-world; you can view a user's stats at a link like this one, which tracks Glitchy: http://slstats.com/users/view/Glitchy+Gumshoe.
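Judging from the example link, SLStats profile URLs follow a simple pattern: the avatar's name appended to a base path, with spaces URL-encoded as `+`. A minimal helper that builds such a URL (the pattern is inferred from the single example above, so treat it as an assumption):

```python
from urllib.parse import quote_plus

SLSTATS_BASE = "http://slstats.com/users/view/"  # base path taken from the example link

def slstats_url(avatar_name):
    """Build an SLStats profile URL; spaces in the avatar name become '+'."""
    return SLSTATS_BASE + quote_plus(avatar_name)

print(slstats_url("Glitchy Gumshoe"))
# → http://slstats.com/users/view/Glitchy+Gumshoe
```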
 

Jan 15, 2007

Stereo and motion parallax cues in human 3D vision

Stereo and motion parallax cues in human 3D vision: can they vanish without a trace?

J Vis. 2006;6(12):1471-85

Authors: Rauschecker AM, Solomon SG, Glennerster A

In an immersive virtual reality environment, subjects fail to notice when a scene expands or contracts around them, despite correct and consistent information from binocular stereopsis and motion parallax, resulting in gross failures of size constancy (A. Glennerster, L. Tcheang, S. J. Gilson, A. W. Fitzgibbon, & A. J. Parker, 2006). We determined whether the integration of stereopsis/motion parallax cues with texture-based cues could be modified through feedback. Subjects compared the size of two objects, each visible when the room was of a different size. As the subject walked, the room expanded or contracted, although subjects failed to notice any change. Subjects were given feedback about the accuracy of their size judgments, where the "correct" size setting was defined either by texture-based cues or (in a separate experiment) by stereo/motion parallax cues. Because of feedback, observers were able to adjust responses such that fewer errors were made. For texture-based feedback, the pattern of responses was consistent with observers weighting texture cues more heavily. However, for stereo/motion parallax feedback, performance in many conditions became worse such that, paradoxically, biases moved away from the point reinforced by the feedback. This can be explained by assuming that subjects remap the relationship between stereo/motion parallax cues and perceived size or that they develop strategies to change their criterion for a size match on different trials. In either case, subjects appear not to have direct access to stereo/motion parallax cues.

Avatar

re-blogged from 3pointD.com

 

 


 

 

 

Filmmaker James Cameron of Titanic fame (and, probably more importantly to readers of this blog, The Terminator) has just gotten the go-ahead on his next film. What interests 3pointD about this is that it will be filmed in a moviemaking version of a virtual world, and new details of the process have emerged in a story in today's New York Times [Computers Join Actors in Hybrids On Screen]. Cameron is using the latest "performance-capture" technology to record the movements of actors' bodies, as well as their facial expressions. But such recordings are usually made against a blank background that's later filled with a digitally produced environment. In the case of Avatar, Cameron's next film, "The most important innovation thus far has been a camera, designed by Mr. Cameron and his computer experts, that allows the director to observe the performances of the actors-as-aliens, in the film's virtual environment, as it happens," the Times writes.

The key phrase here is "as it happens." Cameron and his team have essentially created a virtual world that they view live as the performances are recorded. What they see on their screen is the motion-capture already composited into the digital environment, rather than having to wait until later to see the combination of the two streams of content. In addition, Cameron can pan and zoom around on the fly: "If I want to fly through space, or change my perspective, I can. I can turn the whole scene into a living miniature and go through it on a 50 to 1 scale. It's pretty exciting," he says. That's exciting technology indeed. Though it bears little direct impact on current multiuser virtual worlds, it's the kind of technology that will gradually filter down to broader levels, and the kind of filmmaking that could help promote Internet-based 3D spaces. Will the movie be any good? Who knows. The filmmaking techniques, however (which almost resemble the ultimate in machinima), are fascinating. And don't forget that Cameron sits on the Multiverse advisory board. 

 

Jan 09, 2007

CFP - Interacting with Immersive Worlds

Via Future Cinema Course 

An International Conference presented by the Interactive Arts and Science Program - Brock University, St. Catharines, Ontario

JUNE 5-6, 2007

Keynote Speakers:

Mihaly Csikszentmihalyi, Director of the Quality of Life Research Center at the Drucker School, Claremont Graduate University

James Paul Gee, Tashia Morgridge Professor of Reading, University of Wisconsin at Madison (sponsored by Owl Children’s Trust and the Brock Research Institute for Youth Studies)

Chris Csikszentmihalyi, Director of the Computing Culture group at the MIT Media Lab

Denis Dyack, Director/President, Silicon Knights 

The primary focus of this conference is to explore the growing cultural importance of interactive media. All scholarship on digital interactive media (such as computer games, mixed realities and interactive fiction), as well as users (including adults and children), will be considered in one of four broad conference streams:

Theory of Immersive Worlds explores: i. the theory of interactivity, from perspectives such as narrative and gameplay (ludology); ii. analyses of the cultural and psychological effects of immersive worlds.

Creative Practices in Immersion examines interactive new media art, and its exploration of new idioms and challenges in immersive worlds.

Immersive Worlds in Education examines the application of immersive technologies to teaching and learning.

Immersive Worlds in Entertainment examines entertainment applications of immersive technologies.

Visit the conference website for details

A model of (en)action to approach embodiment

Via VRoot

A model of (en)action to approach embodiment: a cornerstone for the design of virtual environments for learning


Virtual Reality Journal, Springer London, Volume 10, Number 3-4 / December, 2006, Pages 253-269.

Author: Daniel Mellet-d’Huart

This paper presents a model of (en)action from a conceptual and theoretical point of view. This model is used to provide solid bases to overcome the complexity of designing virtual environments for learning (VEL). It provides a common grounding for trans-disciplinary collaborations where embodiment can be perceived as the cornerstone of the project. Where virtual environments are concerned, both computer scientists and educationalists have to deal with the learner/user’s body; therefore the model provides tools with which to approach both human actions and learning processes within a threefold model. It is mainly based on neuroscientific research, including enaction and the neurophysiology of action.

Second Life client source code now available

Linden Lab has announced the availability of the Second Life client source code.

Users can download, inspect, compile, and modify the code, and use it within the terms of the GNU GPL version 2.

 


22:53 Posted in Virtual worlds | Permalink | Comments (0) | Tags: virtual worlds

Jan 07, 2007

Imaging Place SL: The U.S./Mexico Border

Re-blogged from Networked Performance


Imaging Place SL: The U.S./Mexico Border by John (Craig) Freeman: Jan 5 - Feb 23, 2007: Ars Virtua: Gallery 2: Opening 7 - 9pm SLT(Pacific Time) Friday January 5, 2007. Go there

"Imaging Place" is a place-based, virtual reality art project. It takes the form of a user-navigated, interactive computer program that combines panoramic photography, digital video, and three-dimensional technologies to investigate and document situations where the forces of globalization are impacting the lives of individuals in local communities. The goal of the project is to develop the technologies, the methodology and the content for truly immersive and navigable narrative, based in real places. For the past several months, Freeman has been implementing the "Imaging Place" project in Second Life.

When a denizen of Second Life first arrives at an Imaging Place SL scene, he, she or it sees on the ground a large black-and-white satellite photograph of the full disk of the Earth. An avatar can then walk over the Earth to a thin red line which leads to an adjacent, higher-level platform made from a high-resolution aerial photograph of a specific location from around the world. Mapped to the aerial images are networks of nodes constructed of primitive spherical geometry with panoramic photographs texture-mapped to the interior.

The avatar can walk to the center of one of these nodes and use a first person perspective to view the image, giving the user the sensation of being immersed in the location. Streaming audio is localized to individual nodes providing narrative content for the scene. This content includes stories told by people who appear in the images, theory and ambient sound. When the avatar returns to the Earth platform, several rotating ENTER signs provide teleports to other "Imaging Place" scenes located at other places within the world of Second Life. In "Imaging Place SL: The U.S./Mexico Border," Freeman explores the issues, politics and personal memories of this contested space.

LIVE PERFORMANCE by Second Front: Friday, January 05, 2007 - 7 PM PST. Second Front is the first dedicated performance art group in Second Life. To officially open Freeman's installation at Ars Virtua, Second Front will be creating a realtime, site-specific interpretive performance based on Freeman's theme 'Borders' to complement "Imaging Place SL: The U.S./Mexico Border."

Dec 29, 2006

NeuroNet

Today the International Association of Virtual Reality Technologies (IAVRT) announced the creation of NeuroNet, defined as "a first generation network created specifically for the transmission of real-time, virtual reality (VR) and gaming data."

From the press release:

The network, called the Neuronet, will evolve into the world's first public network capable of meeting the data transmission requirements of emerging cinematic and immersive VR technologies. The Neuronet will be separate and distinct from the Internet and will be used for everything from gaming to entertainment to 'v-business', or virtual business.

The massive overcapacity of fiber optic cable left over from the dot-com era makes the new network feasible with minimal investment. Much of the infrastructure and programming utilized to facilitate the Neuronet will be outsourced to telecommunications and virtual reality innovators, but a private sector monopoly on the Neuronet itself will not serve the greater good of the global community. Competing networks have the potential to destabilize evolving virtual worlds and potentially compromise consumer safety. To that end, IAVRT was formed as an international not-for-profit organization that will, through its members, govern the Neuronet, foster its growth and guard its integrity. 
 

Sounds cool... I'll keep an eye on it.

Dec 23, 2006

Effects of VR distraction on pain, fear, and distress

Effects of distraction on pain, fear, and distress during venous port access and venipuncture in children and adolescents with cancer.

J Pediatr Oncol Nurs. 2007 Jan-Feb;24(1):8-19

Authors: Windich-Biermeier A, Sjoberg I, Dale JC, Eshelman D, Guzzetta CE

This study evaluates the effect of self-selected distracters (ie, bubbles, I Spy: Super Challenger book, music table, virtual reality glasses, or handheld video games) on pain, fear, and distress in 50 children and adolescents with cancer, ages 5 to 18, with port access or venipuncture. Using an intervention-comparison group design, participants were randomized to the comparison group (n = 28) to receive standard care or intervention group (n = 22) to receive distraction plus standard care. All participants rated their pain and fear, parents rated participant fear, and the nurse rated participant fear and distress at 3 points in time: before, during, and after port access or venipuncture. Results show that self-reported pain and fear were significantly correlated (P = .01) within treatment groups but not significantly different between groups. Intervention participants demonstrated significantly less fear (P <.001) and distress (P = .03) as rated by the nurse and approached significantly less fear (P = .07) as rated by the parent. All intervention parents said the needlestick was better because of the distracter. The authors conclude that distraction has the potential to reduce fear and distress during port access and venipuncture.

Dec 22, 2006

Second Life avatars consume as much electricity as Brazilians

From 3Dpoint

Nick Carr calculates that a Second Life avatar consumes as much electricity as a Brazilian:

If there are on average between 10,000 and 15,000 avatars "living" in Second Life at any point, that means the world has a population of about 12,500. Supporting those 12,500 avatars requires 4,000 servers as well as the 12,500 PCs the avatars' physical alter egos are using. Conservatively, a PC consumes 120 watts and a server consumes 200 watts. Throw in another 50 watts per server for data-center air conditioning. So, on a daily basis, overall Second Life power consumption equals... 60,000 kilowatt-hours....

Which, annualized, gives us [an average avatar consumption of] 1,752 kWh. So an avatar consumes 1,752 kWh per year..... [T]he average citizen of Brazil consumes 1,884 kWh, which, given the fact that my avatar estimate was rough and conservative, means that your average Second Life avatar consumes about as much electricity as your average Brazilian.

Which means, in turn, that avatars aren't quite as intangible as they seem. They don't have bodies, but they do leave footprints.
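Carr's back-of-the-envelope numbers are easy to check. The sketch below reproduces the quoted calculation exactly; every figure comes from the excerpt above, not from any measured data:

```python
# Reproduce Nick Carr's Second Life electricity estimate (all inputs quoted above).
AVATARS = 12_500        # average concurrent avatars ("population")
SERVERS = 4_000         # Second Life servers
PC_WATTS = 120          # per user's PC (conservative)
SERVER_WATTS = 200      # per server
COOLING_WATTS = 50      # data-center air conditioning, per server

total_watts = AVATARS * PC_WATTS + SERVERS * (SERVER_WATTS + COOLING_WATTS)
daily_kwh = total_watts * 24 / 1000           # watts sustained for 24 h -> kWh
annual_kwh_per_avatar = daily_kwh * 365 / AVATARS

print(daily_kwh)               # → 60000.0 (kWh per day, as Carr states)
print(annual_kwh_per_avatar)   # → 1752.0 (kWh per avatar per year)
```

The arithmetic does come out to 1,752 kWh per avatar per year, just under Carr's figure of 1,884 kWh for the average Brazilian.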


Smoking cues in a virtual world provoke craving in cigarette smokers

Smoking cues in a virtual world provoke craving in cigarette smokers.

Psychol Addict Behav. 2006 Dec;20(4):484-9

Authors: Baumann SB, Sayette MA

Twenty smoking-deprived cigarette smokers participated in a study to test the ability of smoking cues within a virtual world to provoke self-reported craving to smoke. Participants were exposed to 2 virtual-reality simulations displayed on a computer monitor: a control environment not containing any intentional smoking stimuli and a cue-exposure environment containing smoking stimuli. At various points, participants rated their urge to smoke on a scale of 0-100. Results indicated that baseline urge ratings were equivalent in both conditions, but the maximum increase in urge ratings was significantly higher in the cue-exposure environment than in the control environment. This is comparable to what in vivo studies have reported, but with the advantage of simulating more naturalistic and complex settings in a controlled environment.

Dec 03, 2006

Geovirtual reality for sharing information

From Emerging Technology Trends

Engineers and computer scientists at West Virginia University's GeoVirtual Laboratory (GVL) have developed what they called the VRGIS solution - short for 'virtual reality geographic information systems.' The VRGIS project combines several technologies, such as virtual reality (VR), location based services (LBS), and geographic information systems (GIS) with 'the power of the Internet to provide people with a portal to dynamically share information in a revolutionary new way.'

Automatic display of complex data in Second Life

Via 3D Point


SL resident Turner Boehm has developed an application that automatically models the links and nodes of a complex system using SL objects. In one example, multiple interconnecting software systems are represented by an automatically generated set of prims, based on information stored outside SL.
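Boehm's implementation details aren't public, but the general idea — turning an externally stored node/link graph into in-world object placements — can be sketched. Everything below (the data format, the circular layout, the region coordinates) is an illustrative assumption, not his actual design:

```python
import math

def layout_graph(nodes, links, center=(128.0, 128.0, 30.0), radius=20.0):
    """Place each node evenly around a circle at the given sim coordinates.

    Returns a name -> (x, y, z) position map (one prim per node) and a list
    of endpoint pairs an in-world script could connect with link prims.
    """
    cx, cy, cz = center
    positions = {}
    for i, name in enumerate(nodes):
        angle = 2 * math.pi * i / len(nodes)
        positions[name] = (cx + radius * math.cos(angle),
                           cy + radius * math.sin(angle),
                           cz)
    edges = [(positions[a], positions[b]) for a, b in links]
    return positions, edges

# Hypothetical external data: interconnecting software systems and dependencies.
systems = ["CRM", "Billing", "Inventory", "Web"]
dependencies = [("Web", "CRM"), ("CRM", "Billing"), ("Billing", "Inventory")]
positions, edges = layout_graph(systems, dependencies)
```

An in-world script would then rez one prim per entry in `positions` and stretch a thin prim along each pair in `edges` — the layout math stays outside SL, mirroring the post's point that the source data lives outside the world.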

 

Multiverse

From 3D point

Multiverse is a free virtual-world development platform that has just gone from closed to open beta.

From the Multiverse website:


In July 2004, a team of Netscape veterans founded The Multiverse Network, Inc., a company aiming to become the world’s leading network of Massively Multiplayer Online Games (MMOGs) and 3D virtual worlds. Multiverse has pioneered a new technology platform designed to change the economics of virtual world development by providing independent game developers with the resources they need to enter and compete in the $2 billion online game market.

When Multiverse's team of world-class engineering and business professionals worked at Netscape in the very early days, they helped architect the Internet-based platforms now used by hundreds of millions of people worldwide. Other ground-breaking companies they have made significant contributions to include Borland, Silicon Graphics, Excite, and Netflix. The full Multiverse team also includes video game industry veterans.

Multiverse's unique technology platform will change the economics of virtual world development by empowering independent game developers to create high-quality, Massively Multiplayer Online Games (MMOGs) and non-game virtual worlds for less money and in less time than ever before. Multiverse solves the prohibitive challenges of game creation by providing developers with a comprehensive, pre-coded client-server infrastructure and tools, a wide range of free content--including a complete game for modification--and a built-in market of consumers. The Multiverse Network will give video game players a single program--the Multiverse Client--that lets them play all of the MMOGs and visit all of the non-game virtual worlds built on the Multiverse platform.

For the first time, indie developers will have the opportunity to create the virtual worlds they've been dreaming about. And many of these new worlds will attract players who are completely ignored by today’s MMOG publishers.

Virtual reality in medical and psychiatric education

Virtual reality, telemedicine, web and data processing innovations in medical and psychiatric education and clinical care.

Acad Psychiatry. 2006;30(6):528-33

Authors: Hilty DM, Alverson DC, Alpert JE, Tong L, Sagduyu K, Boland RJ, Mostaghimi A, Leamon ML, Fidler D, Yellowlees PM

OBJECTIVE: This article highlights technology innovations in psychiatric and medical education, including applications from other fields. METHOD: The authors review the literature and poll educators and informatics faculty for novel programs relevant to psychiatric education. RESULTS: The introduction of new technologies requires skill at implementation and evaluation to assess the pros and cons. There is a significant body of literature regarding virtual reality and simulation, including assessment of outcomes, but other innovations are not well studied. CONCLUSIONS: Innovations, like other uses of technology, require collaboration between parties and integration within the educational framework of an institution.

Education about hallucinations using an internet virtual reality system: a qualitative survey

Education about hallucinations using an internet virtual reality system: a qualitative survey.

Acad Psychiatry. 2006;30(6):534-9

Authors: Yellowlees PM, Cook JN

OBJECTIVE: The authors evaluate an Internet virtual reality technology as an education tool about the hallucinations of psychosis. METHOD: This is a pilot project using Second Life, an Internet-based virtual reality system, in which a virtual reality environment was constructed to simulate the auditory and visual hallucinations of two patients with schizophrenia. Eight hundred sixty-three self-referred users took a self-guided tour. RESULTS: Five hundred seventy-nine (69%) of the users who toured the environment completed a survey. Of the survey responders, 440 (76%) thought the environment improved their understanding of auditory hallucinations, 69% thought it improved their understanding of visual hallucinations, and 82% said they would recommend the environment to a friend. CONCLUSIONS: Computer simulations of the perceptual phenomena of psychiatric illness are feasible with existing personal computer technology. Integration of the evaluation survey into the environment itself was possible. The use of Internet-connected graphics environments holds promise for public education about mental illness.

Ogoglio project

From the project's website (via 3D point):

The Ogoglio project is exploring shared online worlds in the context of web enabled work. If World of Warcraft is "the new golf", then we're exploring "the new business district".

We are creating spaces where people will meet with remote coworkers, collaborate using new tools, integrate existing business applications, and enjoy the benefits of being in the same office with people in different time zones.

Project summary:

Many people now start their careers with a deep understanding of cooperative 3D communities like World of Warcraft, but when they enter the workplace they are handed email and a shared calendar and are expected to teach themselves how to be productive. Instead of forcing people to limit their communication to these thin channels, The Ogoglio project will provide community workspaces with the planning and coordination tools which have proven to be effective in multiplayer online games.

 Ogoglio City Map

 

Link to the Ogoglio blog

 

Nov 29, 2006

Virtual reality applications for the remapping of space in neglect patients

Virtual reality applications for the remapping of space in neglect patients.

Restor Neurol Neurosci. 2006;24(4-6):431-41

Authors: Ansuini C, Pierno AC, Lusher D, Castiello U

Purpose: The aims of the present article were the following: (i) to provide some evidence of the potential of virtual reality (VR) for the assessment, training and recovery of hemispatial neglect; (ii) to present data from our laboratory which seem to confirm that the clinical manifestation of neglect can be improved by using VR techniques; and (iii) to ascertain the neural bases of this improvement. Methods: We used a VR device (DataGlove) interfaced with a specially designed computer program which allowed neglect patients to reach and grasp a real object while simultaneously observing the grasping of a virtual object located within a virtual environment by a virtual hand. The virtual hand was commanded in real time by their real hand. Results: After a period of training, hemispatial neglect patients coded the visual stimuli within the neglected space in an identical fashion as those presented within the preserved portions of space. However it was also found that only patients with lesions that spared the inferior parietal/superior temporal regions were able to benefit from the virtual reality training. Conclusions: It was concluded that using VR it is possible to re-create links between the affected and the nonaffected space in neglect patients. Furthermore, that specific regions may play a crucial role in the recovery of space that underlies the improvement of neglect patients when trained with virtual reality. The implications of these results for determining the neural bases of a higher order attentional and/or spatial representation, and for the treatment of patients with unilateral neglect are discussed.

Nov 24, 2006

Mobile virtual worlds

Prompted by Layla Nassary Zadeh, I wanted to learn more about the possibility of implementing augmented reality on mobile devices. Much to my surprise, this field is more advanced than I expected.

For example, a team of researchers from Nokia's Mobile Augmented Reality Applications (MARA) project has created a prototype phone that makes objects in the real world hyperlink to information on the Internet. Using the phone's built-in camera, a user can highlight objects on the mobile phone's LCD and pull in additional information about them from the Internet. Moreover, by altering the orientation of the phone, the display will toggle between live view and satellite map view. In map view, nearby real-world objects are highlighted for convenient reference.


The prototype consists of a Nokia S60 platform phone and an attached external sensor box that provides position and orientation information to the phone via a Bluetooth connection.


This video of downtown Helsinki shows some landmarks with associated virtual objects, and demonstrates the automatic switching between Augmented Reality mode and Map mode that happens when the user alternates between holding the phone vertically and horizontally.
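The vertical/horizontal mode switch described above amounts to thresholding the phone's pitch angle. Nokia's actual MARA code isn't public; this is just an illustrative sketch, with a hysteresis band added (a common trick, assumed here rather than documented) so the display doesn't flicker when the phone hovers near the threshold:

```python
AR_MODE, MAP_MODE = "AR", "MAP"

def select_mode(pitch_degrees, current_mode, threshold=45.0, hysteresis=10.0):
    """Phone held vertically (high pitch) -> AR view; held flat -> map view.

    The mode only changes once the pitch leaves a dead zone of
    +/- hysteresis degrees around the threshold, preventing flicker.
    """
    if current_mode == AR_MODE and pitch_degrees < threshold - hysteresis:
        return MAP_MODE
    if current_mode == MAP_MODE and pitch_degrees > threshold + hysteresis:
        return AR_MODE
    return current_mode  # inside the dead zone: keep the current view

# Simulated pitch readings as the user raises, then lowers, the phone.
mode = MAP_MODE
for pitch in (10, 40, 60, 50, 30, 20):
    mode = select_mode(pitch, mode)
```

Tracing the readings: the view flips to AR only at 60° (past 55°), stays in AR at 50° despite being below the nominal 45° threshold, and flips back to the map at 30°.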

MARA was demonstrated at the fifth IEEE and ACM International Symposium on Mixed and Augmented Reality in Santa Barbara in October.

Nov 08, 2006

Ars Virtua Artist-in-Residence (AVAIR)

Re-blogged from Networked performance


Ars Virtua Artist-in-Residence (AVAIR): Call for Proposals: Deadline November 21, 2006: Ars Virtua Gallery and New Media Center in Second Life is soliciting proposals for its artist-in-residence program. The deadline for submissions is November 21, 2006. Established and emerging artists will work within the 3d rendered environment of Second Life. Each 11-week residency will culminate in an exhibition and a community-based event. Residents will also receive a $400 stipend, training and mentorship.

Ars Virtua Artist-in-Residence (AVAIR) is an extended performance that examines what it means to reside in a place that has no physical location.

Ars Virtua presents artists with a radical alternative to "real life" galleries: 1) Since it does not physically exist, artists are not limited by physics, material budgets, building codes or landlords. Their only constraints are social conventions and (malleable-extensible) software. 2) The gallery is accessible 24 hours a day to a potentially infinite number of people in every part of the world simultaneously. 3) Because of the ever-evolving, flexible nature of Second Life, the "audience" is a far less predictable variable than one might find in a Real Life gallery. Residents will be encouraged to explore, experiment with and challenge traditional conventions of art making and distribution, value and the art market, artist and audience, space and place.

Application Process: Artists are encouraged to log in to Second Life and create an avatar BEFORE applying. Download the application requirements here: http://arsvirtua.com/residence. Finalists will be contacted for an interview. Interviews will take place from November 28-30.