Virtual Reality for Scientists

When you think of virtual reality, or VR, you might conjure up images of action-packed video games or immersive tours of deep ocean waters and other exotic locales. But in recent years, scientists have started to don VR goggles too, not for entertainment but for analyzing and comprehending their data.

At Caltech, efforts to design VR tools for the future are underway, with prototypes in development for studying everything from worms to ocean waters to biomolecules and more.

“We are thinking about what doing science will look like 10 to 15 years from now,” says Santiago Lombeyda, a computational scientist at the Center for Data-Driven Discovery (CD3), the group behind the virtual reality research, as well as other data science projects. CD3 is a joint partnership between Caltech and JPL, which is managed by Caltech for NASA. “In the future, a scientist might be working on their desktop, and then they could just grab a pair of virtual reality glasses, or they may even be already wearing regular glasses that enable VR, and then start manipulating their data in the same shared visual context of their actual work area.”

“Billions of dollars have been poured into VR in the gaming industry. We can leverage their software and ask what it can do for science and scholarship,” says George Djorgovski, director of CD3 and a professor of astronomy at Caltech. Djorgovski has been working on developing VR tools for data analysis for more than a decade and, in 2015, developed a related startup company called Virtualitics, which combines VR with machine learning.

The CD3 group’s latest project, a collaboration with the National Cancer Institute, is to develop better tools for finding tumors in diagnostic imaging scans. To that end, the group has built a virtual reality environment, using a Vive VR headset, in which a user can visualize computed tomography (CT) scans from patients and identify possible tumors. According to the Caltech and JPL researchers, the 3-D virtual environment lets radiologists visualize and identify potential tumors more effectively than the standard 2-D imaging methods available now.
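For readers curious about the data side of such a viewer, the sketch below shows one common way to assemble 2-D CT slices into the kind of 3-D volume a VR renderer can display. It is a minimal illustration using the open-source pydicom and NumPy libraries, not the CD3 team's actual pipeline, and the directory path is hypothetical.

```python
# Minimal sketch: stack 2-D DICOM CT slices into a 3-D volume.
# This is an illustrative workflow, not the CD3 software.
import glob
import numpy as np
import pydicom

def load_ct_volume(dicom_dir):
    """Read all DICOM slices in a directory and stack them into a 3-D array."""
    slices = [pydicom.dcmread(path) for path in glob.glob(f"{dicom_dir}/*.dcm")]
    # Order slices along the scanner's z-axis so the volume is anatomically contiguous.
    slices.sort(key=lambda s: float(s.ImagePositionPatient[2]))
    # Convert raw pixel values to Hounsfield units using each slice's rescale tags.
    volume = np.stack(
        [s.pixel_array * float(s.RescaleSlope) + float(s.RescaleIntercept)
         for s in slices]
    ).astype(np.int16)
    return volume  # shape: (num_slices, rows, cols)

# Hypothetical usage: a VR renderer would map this volume to a 3-D texture.
ct = load_ct_volume("patient_001/ct_series")  # path is illustrative
print(ct.shape, ct.min(), ct.max())
```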

“The 3-D environment can really have an impact in understanding three-dimensional structures,” says Lombeyda. “And that’s why it applies so well to looking for tumors. By walking around the data and seeing it from all sides, you might find things you wouldn’t otherwise see by just looking at 2-D slices of tissues.”

The group has begun working with several radiologists to identify possible tumors in patients. The idea is to then take this VR training data and feed it into machine-learning, or artificial intelligence (AI), programs, an effort being managed by Caltech’s Ashish Mahabal, lead computational and data scientist for CD3. Once those programs have learned how to better identify tumors, they could be used in the future to help medical professionals find candidate tumors for follow-up. “The better the training data, the better the machine-learning program,” says Djorgovski.
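As a rough illustration of that training loop, the sketch below fits an off-the-shelf classifier to image patches that radiologists have labeled in VR. Everything here is schematic: the file names, the flattened-patch features, and the model choice (a scikit-learn random forest standing in for whatever the team actually uses) are assumptions.

```python
# Schematic sketch: train a classifier on VR-labeled CT patches (assumed workflow).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Hypothetical inputs: `patches` holds small 3-D sub-volumes cut around candidate
# sites; `labels` holds the radiologists' VR annotations (1 = tumor, 0 = not).
patches = np.load("vr_labeled_patches.npy")  # shape: (n_samples, d, h, w); illustrative file
labels = np.load("vr_labels.npy")            # shape: (n_samples,)

# Flatten each patch into a feature vector; a production system would more
# likely use a 3-D convolutional network on the raw volumes.
X = patches.reshape(len(patches), -1)
X_train, X_test, y_train, y_test = train_test_split(
    X, labels, test_size=0.2, stratify=labels, random_state=0
)

model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# Held-out accuracy is a crude proxy for "the better the training data,
# the better the machine-learning program."
print("held-out accuracy:", model.score(X_test, y_test))
```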

“There is a synergy between machine learning and virtual reality,” says Dan Crichton, director of the Center for Data Science and Technology at JPL, a partner organization to CD3. “We are training AI to see things, such as tumors, that otherwise might have been missed. This can be a useful aid to medical practitioners.”

Crichton says that components of the software they are using for their VR programs were developed originally for astronomy and planetary imaging. “Anytime you look at data in three or more dimensions, it’s challenging,” he says. “When we study planetary surfaces, for example, we want to see more than just a one-dimensional slice. We want to see the depth and how other variables change. The principles we’ve learned from this kind of imaging apply to our VR programs.”

According to the scientists, the virtual reality environment offers a more intuitive way to understand their data, whether the data represent an object, such as a tumor or a molecule, or take the form of a graph with many variables plotted.

“In a 3-D virtual environment, scientists have a more intuitive and longer-lasting grasp of the spatial relations of objects. In today’s world, more and more multidimensional data are becoming available, and these tools are improving the ability of humans to comprehend them,” says Djorgovski. “In the same way that we went from black-and-white photography to color photography, or from phone calls to Skype chats, people will not want to go back from virtual reality.”

In another application of the VR program, developed by the CD3 group in collaboration with Paul Sternberg, Bren Professor of Biology, a user can manipulate a model of a tiny transparent worm called Caenorhabditis elegans (C. elegans) using a tool that grasps the worm like a pair of tongs. When the program starts, the worm first appears at a fairly small scale on a desk surface in a virtual office environment. The user can then enlarge the worm by dropping it on the floor. And if the user wants to see a huge model of the worm, they can toss it out of a virtual window, where it expands to a scale larger than a human.
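The placement-dependent sizing described above comes down to a simple rule mapping where an object is dropped to a display scale. The zone names and scale factors in the sketch below are invented for illustration; the actual interaction logic in the CD3 software is not public.

```python
# Illustrative sketch of "drop zone determines scale" interaction logic.
# Zone names and scale factors are hypothetical, not the CD3 implementation.
from dataclasses import dataclass

@dataclass
class Placement:
    zone: str  # "desk", "floor", or "outside"

# Map each drop zone to a model scale factor (1.0 = desktop size).
ZONE_SCALE = {
    "desk": 1.0,      # small, desktop-scale model
    "floor": 20.0,    # room-scale model the user can walk around
    "outside": 500.0, # larger-than-human model seen through the window
}

def rescale_model(placement: Placement, base_size_m: float) -> float:
    """Return the rendered size (in meters) for a model dropped in a zone."""
    return base_size_m * ZONE_SCALE[placement.zone]

# A 1 mm worm model tossed out the virtual window becomes half a meter long.
print(rescale_model(Placement("outside"), 0.001))
```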

“We want researchers to come to us with data that we can then quickly prep for viewing in a VR space,” says Lombeyda. “Whatever they are studying can be seen at desktop scale, or they can drop it on the floor and walk around it.”

One of the challenges of developing VR tools that scientists will actually use lies in making the experience smooth, without jerky motions. VR programs can leave people feeling dizzy, and avoiding this is something that the CD3 group continues to work on. Lombeyda says that their overall goal is to make the experience as natural and seamless as possible for a scientist working at an office desk, so that they might be checking email and then could pop on a pair of VR glasses to quickly examine new data.

“Most of the time, scientists work at desks, so we want to optimize that experience,” says Lombeyda, adding that augmented reality, or AR, glasses, in which a viewer is only partially immersed in a virtual environment, may also become a common tool for scientists. But, he says, they are focusing on VR now in this early phase of development.

The group is also working on building VR classrooms, and last year they taught a course in a virtual space. VR classrooms can bring together students in different locations, even across the globe, to understand and manipulate data in 3-D. According to Djorgovski, this kind of setup could also be used to improve teaching. For example, students in a VR classroom could anonymously hit a button if they were not comprehending a certain topic. “If the teacher were to get a few alerts, they would know at that point the students were confused and they needed to backtrack. This is something that can be done more easily and more quickly in VR versus traditional classrooms.”

Recently, the scientists presented their VR tumor program at SIGGRAPH, a computer graphics conference, to favorable reviews. “Some of the comments we kept hearing were how people were excited to see VR for something other than gaming, and also how natural the experience was in our VR setup,” says Lombeyda.

Though the technology may still be in its early phases, the researchers are excited to press on. The future of VR, they say, is not just for fun and games but for collectively making sense of our world. “All sciences are undergoing the same transformations of having to deal with larger and larger data sets, and traditional tools won’t work,” says Djorgovski. “Instead of reinventing the wheel everywhere, scientific groups are developing new methodologies for big and complex data sets. Everybody faces the same problems, so we have a great opportunity for sharing solutions and ideas.”

Virtual reality can spot navigation problems in early Alzheimer’s disease

Virtual reality (VR) can identify early Alzheimer’s disease more accurately than ‘gold standard’ cognitive tests currently in use, suggests new research from the University of Cambridge.


The study highlights the potential of new technologies to help diagnose and monitor conditions such as Alzheimer’s disease, which affects more than 525,000 people in the UK.

In 2014, Professor John O’Keefe of UCL was jointly awarded the Nobel Prize in Physiology or Medicine for ‘discoveries of cells that constitute a positioning system in the brain’. Essentially, this means that the brain contains a mental ‘satnav’ of where we are, where we have been, and how to find our way around.

A key component of this internal satnav is a region of the brain known as the entorhinal cortex. This is one of the first regions to be damaged in Alzheimer’s disease, which may explain why ‘getting lost’ is one of the first symptoms of the disease. However, the pen-and-paper cognitive tests used in clinic to diagnose the condition are unable to test for navigation difficulties.

In collaboration with Professor Neil Burgess at UCL, a team of scientists at the Department of Clinical Neurosciences at the University of Cambridge led by Dr Dennis Chan, previously Professor O’Keefe’s PhD student, developed and trialled a VR navigation test in patients at risk of developing dementia. The results of their study are published today in the journal Brain.

In the test, a patient dons a VR headset and undertakes a test of navigation while walking within a simulated environment. Successful completion of the task requires intact functioning of the entorhinal cortex, so Dr Chan’s team hypothesised that patients with early Alzheimer’s disease would be disproportionately affected on the test.
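The article does not spell out how performance is scored, but navigation tests of this kind are typically quantified as the error between where a participant believes a target location is and where it actually is. The sketch below computes such a path-integration error; the metric and data layout are illustrative assumptions, not the published protocol.

```python
# Illustrative scoring of a VR navigation (path-integration) test.
# The error metric and coordinate layout are assumptions for illustration.
import numpy as np

def navigation_error(true_location: np.ndarray, response: np.ndarray) -> float:
    """Euclidean distance (in meters) between the true location and the participant's response."""
    return float(np.linalg.norm(true_location - response))

def mean_error(trials: list) -> float:
    """Average error across (true, response) trial pairs; larger = poorer navigation."""
    return float(np.mean([navigation_error(t, r) for t, r in trials]))

# Hypothetical trial: the participant should return to (0, 0) but stops at (1.2, -0.8).
print(navigation_error(np.array([0.0, 0.0]), np.array([1.2, -0.8])))  # ~1.44 m
```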

The team recruited 45 patients with mild cognitive impairment (MCI) from the Cambridge University Hospitals NHS Trust Mild Cognitive Impairment and Memory Clinics. Patients with MCI typically exhibit memory impairment, but while MCI can indicate early Alzheimer’s, it can also be caused by other conditions such as anxiety and even normal aging. As such, establishing the cause of MCI is crucial for determining whether affected individuals are at risk of developing dementia in the future.

The researchers took samples of cerebrospinal fluid (CSF) to look for biomarkers of underlying Alzheimer’s disease in their MCI patients, with 12 testing positive. The researchers also recruited 41 age-matched healthy controls for comparison.

All of the patients with MCI performed worse on the navigation task than the healthy controls. However, the study yielded two crucial additional observations. First, the MCI patients with positive CSF markers – indicating the presence of Alzheimer’s disease and thus placing them at risk of developing dementia – performed worse than those with negative CSF markers, who are at low risk of future dementia.

Second, the VR navigation task was better at differentiating between these low- and high-risk MCI patients than a battery of currently used tests considered the gold standard for diagnosing early Alzheimer’s.
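One standard way to make such a comparison precise is to treat each test score as a classifier of CSF status and compare areas under the ROC curve, as sketched below. The use of AUC, the array names, and the numbers are illustrative assumptions rather than the paper's exact analysis.

```python
# Illustrative comparison of two diagnostic markers via ROC AUC.
# All values below are made-up placeholders, not data from the study.
import numpy as np
from sklearn.metrics import roc_auc_score

# Hypothetical per-patient data for the MCI group:
# csf_positive: 1 if CSF biomarkers indicate underlying Alzheimer's, else 0.
# vr_error: mean VR navigation error (higher = worse navigation).
# battery_deficit: deficit score from a standard pen-and-paper test battery.
csf_positive = np.array([1, 1, 0, 0, 1, 0, 0, 1, 0, 0])
vr_error = np.array([3.1, 2.8, 1.2, 0.9, 2.5, 1.4, 1.1, 2.9, 1.0, 1.3])
battery_deficit = np.array([1.9, 1.1, 1.0, 0.8, 1.2, 1.3, 0.9, 1.5, 1.1, 1.0])

# AUC near 1.0 means the marker cleanly separates high- and low-risk patients;
# the better-differentiating test has the higher AUC.
print("VR navigation AUC:", roc_auc_score(csf_positive, vr_error))
print("Cognitive battery AUC:", roc_auc_score(csf_positive, battery_deficit))
```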

“These results suggest a VR test of navigation may be better at identifying early Alzheimer’s disease than tests we use at present in clinic and in research studies,” says Dr Chan.

VR could also help clinical trials of future drugs aimed at slowing down, or even halting, progression of Alzheimer’s disease. Currently, the first stage of drug trials involves testing in animals, typically mouse models of the disease. To determine whether treatments are effective, scientists study their effect on navigation using tests such as a water maze, where mice have to learn the location of hidden platforms beneath the surface of opaque pools of water. If new drugs are found to improve memory on this task, they proceed to trials in human subjects, but using word and picture memory tests. This lack of comparability of memory tests between animal models and human participants represents a major problem for current clinical trials.

“The brain cells underpinning navigation are similar in rodents and humans, so testing navigation may allow us to overcome this roadblock in Alzheimer’s drug trials and help translate basic science discoveries into clinical use,” says Dr Chan. “We’ve wanted to do this for years, but it’s only now that VR technology has evolved to the point that we can readily undertake this research in patients.”

In fact, Dr Chan believes technology could play a crucial role in diagnosing and monitoring Alzheimer’s disease. He is working with Professor Cecilia Mascolo at Cambridge’s Centre for Mobile, Wearable Systems and Augmented Intelligence to develop apps for detecting the disease and monitoring its progression. These apps would run on smartphones and smartwatches. As well as looking for changes in how we navigate, the apps will track changes in other everyday activities such as sleep and communication.

“We know that Alzheimer’s affects the brain long before symptoms become apparent,” says Dr Chan. “We’re getting to the point where everyday tech can be used to spot the warning signs of the disease well before we become aware of them.

“We live in a world where mobile devices are almost ubiquitous, and so app-based approaches have the potential to diagnose Alzheimer’s disease at minimal extra cost and at a scale way beyond that of brain scanning and other current diagnostic approaches.”

The VR research was funded by the Medical Research Council and the Cambridge NIHR Biomedical Research Centre. The app-based research is funded by Wellcome, the European Research Council and the Alan Turing Institute.

Reference
Howett, D., Castegnaro, A., et al. Differentiation of mild cognitive impairment using an entorhinal cortex-based test of VR navigation. Brain; 28 May 2019; DOI: 10.1093/brain/awz116

Materials provided by University of Cambridge