Advanced Reality Lab (ARL): Projects

Members of the Advanced Reality Lab team are among the best researchers and experts in their fields. Our research projects are ground-breaking, at the forefront of academic research in virtual and augmented reality.

Virtual Humans

 

Virtual humans are animated, intelligent virtual agents that interact with you, typically in immersive VR. Our lab has developed a platform for virtual humans, which is being used for multidisciplinary research. Developing believable virtual humans encompasses a wide range of challenges, from photorealistic graphics and animation to high-level AI. We focus on a subset of these problems, and we are happy to collaborate with other labs or industry partners with complementary skills.

 

Several social scenarios have been implemented with our virtual human platform.

Deep-learning-based multimodal communication

Recent breakthroughs achieved by applying deep learning to large datasets are highly relevant for the development of virtual humans. In 2018 we started addressing some of the underlying challenges, in collaboration with Prof. Yaakov Hel-Or, Prof. Arik Shamir, Dr. Kfir Bar, and Dr. Shai Fine. The general framework is based on large multimodal datasets scraped from the web and analyzed with seq2seq methods, aimed at creative generation of verbal and nonverbal behavior.
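
To make the seq2seq framing concrete, here is a minimal encoder-decoder sketch in PyTorch that maps a verbal token sequence to a sequence of discrete nonverbal-behavior tokens (e.g. gesture labels). The architecture, vocabulary sizes, and toy data are illustrative assumptions only, not our actual model.

# Minimal seq2seq sketch: verbal tokens in, nonverbal-behavior tokens out.
# All names, sizes, and the toy data below are illustrative only.
import torch
import torch.nn as nn

class Seq2Seq(nn.Module):
    def __init__(self, src_vocab, tgt_vocab, hidden=128):
        super().__init__()
        self.src_emb = nn.Embedding(src_vocab, hidden)
        self.tgt_emb = nn.Embedding(tgt_vocab, hidden)
        self.encoder = nn.GRU(hidden, hidden, batch_first=True)
        self.decoder = nn.GRU(hidden, hidden, batch_first=True)
        self.out = nn.Linear(hidden, tgt_vocab)

    def forward(self, src, tgt):
        # Encode the verbal sequence into a context state.
        _, state = self.encoder(self.src_emb(src))
        # Decode the behavior sequence conditioned on that state
        # (teacher forcing: the ground-truth tgt is the decoder input).
        dec_out, _ = self.decoder(self.tgt_emb(tgt), state)
        return self.out(dec_out)  # logits over behavior tokens

# Toy usage: 2 utterances of length 5 -> behavior sequences of length 4.
model = Seq2Seq(src_vocab=1000, tgt_vocab=50)
src = torch.randint(0, 1000, (2, 5))
tgt = torch.randint(0, 50, (2, 4))
logits = model(src, tgt)  # shape (2, 4, 50)
loss = nn.functional.cross_entropy(logits.reshape(-1, 50), tgt.reshape(-1))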

 

On 22.9.19 we will hold a small, informal international workshop on this topic.

 

An old and somewhat related paper:

Link: D. Friedman and M. Gillies, Teaching characters how to use body language using reinforcement learning, Proc. 5th Intelligent Virtual Agents (IVA 2005), LNCS 3661, pp. 205-214, Springer-Verlag, Berlin Heidelberg, 2005.

And a related video demo of Half-Life 2 agents learning combat behavior using reinforcement learning.
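
As background, both the paper and the demo rely on reinforcement learning, in which an agent improves its behavior from reward signals. The toy tabular Q-learning sketch below shows the core update rule; the environment, state and action spaces, and reward are invented purely for illustration.

# Toy tabular Q-learning. The environment, sizes, and reward below
# are invented for illustration; they do not model the actual demos.
import random

N_STATES, N_ACTIONS = 10, 4
ALPHA, GAMMA, EPSILON = 0.1, 0.95, 0.1  # learning rate, discount, exploration
Q = [[0.0] * N_ACTIONS for _ in range(N_STATES)]

def step(state, action):
    """Stand-in environment: random transition, reward in the last state."""
    next_state = random.randrange(N_STATES)
    reward = 1.0 if next_state == N_STATES - 1 else 0.0
    return next_state, reward

for episode in range(500):
    s = 0
    for t in range(50):
        # Epsilon-greedy action selection.
        if random.random() < EPSILON:
            a = random.randrange(N_ACTIONS)
        else:
            a = max(range(N_ACTIONS), key=lambda i: Q[s][i])
        s2, r = step(s, a)
        # Core update: move Q(s, a) toward r + gamma * max_a' Q(s', a').
        Q[s][a] += ALPHA * (r + GAMMA * max(Q[s2]) - Q[s][a])
        s = s2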

 

Psychological and physiological responses to virtual humans

 

Over the years we have carried out studies evaluating the psychological, physiological (mostly autonomic nervous system measures: skin conductance, heart rate and its derivatives, respiration, and head movements), and behavioral (mostly spoken language, both prosody and semantics) responses of VR participants to virtual humans.
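
As a concrete example of the "derivatives" computed from heart rate, here is a toy sketch deriving mean heart rate and RMSSD (a standard heart-rate-variability measure) from R-peak timestamps. The input data are invented, and this is not our actual analysis pipeline.

# Toy sketch: mean heart rate and RMSSD from (hypothetical) R-peak times.
import math

r_peaks = [0.00, 0.82, 1.65, 2.45, 3.30, 4.10]  # seconds, invented
ibis = [b - a for a, b in zip(r_peaks, r_peaks[1:])]  # inter-beat intervals

mean_hr = 60.0 / (sum(ibis) / len(ibis))  # beats per minute
diffs = [b - a for a, b in zip(ibis, ibis[1:])]
rmssd = math.sqrt(sum(d * d for d in diffs) / len(diffs))  # seconds

print(f"mean HR = {mean_hr:.1f} bpm, RMSSD = {rmssd * 1000:.1f} ms")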

 

Our virtual humans have been used to study flirting and romantic behavior, in collaboration with Prof. Gurit Birnbaum, and conflict resolution in the context of the Israeli-Palestinian conflict, in collaboration with Dr. Beatrice Hasler.

 

Papers:

Link: Y. R. Chen, G. E. Birnbaum, J. Giron, D. Friedman, Individuals in a romantic relationship express guilt and devaluate attractive alternatives after flirting with a virtual bartender, Proc. 19th ACM International Conference on Intelligent Virtual Agents (IVA), pp. 62-64, ACM, 2019.

 

Link: B. Hasler, T. Shani, G. Hirschberger, D. Friedman, Virtual peacemakers: Mimicry increases empathy in simulated contact with virtual outgroup members, Cyberpsychology, Behavior, and Social Networking, 17(12): 766-771, 2014.

 

Link: B. S. Hasler and D. Friedman, Sociocultural conventions in avatar-mediated nonverbal communication: A cross-cultural analysis of virtual proxemics. Journal of Intercultural Communication Research (special issue on Intercultural New Media Research), 41(3): 239-260, 2012.

 

D. Friedman, A. Steed, and M. Slater, Spatial social behavior in Second Life, Proc. 7th Intelligent Virtual Agents, LNAI 4722, Pelachaud et al. (eds), pages 252-263, Paris, France, September 2007.

 

Software:

Gal Gilbert's MSc project: a toolkit for recording and analyzing spoken language in VR scenarios.

Hybrid representations: The proxy

 

Between 2010 and 2015 the lab took part in Beaming, an EU FP7 project aimed at social telepresence. Our role was to develop the AI proxy: software that automatically controls your remote representation, either virtual or robotic. See the papers and videos below, demonstrating automatic research assistants, being in three places at the same time, automatic nonverbal translation, automatically replacing you in class, a dual-gender avatar, and more.
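
To illustrate the proxy idea, here is a minimal sketch of a controller that relays the user's live behavior while they are connected and substitutes autonomous behavior otherwise. The interface, states, and behaviors are hypothetical, not the actual Beaming architecture.

# Minimal "proxy" sketch: relay the user's live behavior when present,
# otherwise act autonomously. All names and behaviors are hypothetical.
from dataclasses import dataclass

@dataclass
class Behavior:
    gaze_target: str
    gesture: str

def autonomous_policy(conversation_state: str) -> Behavior:
    """Stand-in AI: pick a plausible nonverbal response."""
    if conversation_state == "being_addressed":
        return Behavior(gaze_target="speaker", gesture="nod")
    return Behavior(gaze_target="group", gesture="idle")

class Proxy:
    def __init__(self):
        self.live_behavior = None  # last behavior received from the user

    def update_from_user(self, behavior: Behavior):
        self.live_behavior = behavior

    def drive_representation(self, conversation_state: str) -> Behavior:
        # Relay the real user if we have live data; otherwise act autonomously.
        if self.live_behavior is not None:
            return self.live_behavior
        return autonomous_policy(conversation_state)

proxy = Proxy()
print(proxy.drive_representation("being_addressed"))   # autonomous: nod
proxy.update_from_user(Behavior("speaker", "wave"))
print(proxy.drive_representation("being_addressed"))   # relayed: wave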

 

Papers:

Link: S. Kishore, X. Navarro Muncunill, P. Bourdin, K. Or-Berkers, D. Friedman, M. Slater, Multi-Destination Beaming: Apparently Being in Three Places at Once Through Robotic and Virtual Embodiment, Frontiers in Robotics and AI, 3(65), 2016.

 

Link: B. Hasler, P. Tuchman, and D. Friedman, Virtual research assistants: Replacing human interviewers by automated avatars in virtual worlds, Computers in Human Behavior, 29(4): 1608-1616, 2013.

 

Link: B. Hasler, O. Salomon, P. Tuchman, A. Lev-Tov, D. Friedman, Real-time gesture translation in intercultural communication, Artificial Intelligence & Society, pp. 1-11, 2014.

 

Link: D. Friedman, P. Tuchman, Virtual clones: Data-driven social navigation, H. H. Vilhjálmsson, S. Kopp, S. Marsella, K. R. Thórisson (Eds.), 11th Int'l Conf. Intelligent Virtual Agents, LNCS 6895 Springer pages 28-34, Reykjavik, Iceland, September 2011.

 

D. Friedman, O. Salomon, B. Hasler, Virtual substitute teacher: Introducing the concept of the classroom proxy, Proceedings of the 3rd European Immersive Education Summit (iED), pp. 186-197, London, UK, November 2013.

 

Link: D. Friedman, B. Hasler, The Beaming proxy: Towards virtual clones for communication, in Human Computer Confluence: Advancing our Understanding of the Emerging Symbiotic Relation between Humans and Computing Devices, Versita, pp. 156-174, 2016.