REAL-TIME LIVE!

  • Full Conference Pass (FC)
  • Full Conference One-Day Pass (1D)
  • Basic Conference Pass (BC)
  • Student One-Day Pass (SP)

Date: Friday, December 7th
Time: 4:00pm - 4:15pm
Venue: Hall C (4F, C Block)


Summary: DreamFlux presents MR360 Live, a new way to create immersive and interactive mixed reality applications. It blends 3D virtual objects into live-streamed 360° videos in real time, creating the illusion of interacting with objects in the video. From a standard 360° video, we automatically extract the important lighting details, use them to illuminate virtual objects, and realistically composite those objects into the video. Our MR360 toolkit runs in real time and is integrated into game engines, enabling content creators to conveniently build interactive mixed reality applications. We demonstrate an augmented-teleportation application that lets VR users travel between different 360° videos, adding and interacting with digital objects in each video to create their own augmented/mixed reality world. Using a live-streaming 360° camera, we will travel to and augment the Real-Time Live! stage in front of a live audience.
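
As a rough illustration of that lighting-extraction step (a minimal sketch under assumed conventions, not the actual MR360 implementation), one can downsample an equirectangular frame into a coarse radiance map and treat its brightest texel as a dominant directional light:

```python
import numpy as np

def dominant_light_from_equirect(frame):
    """Estimate a dominant light direction and colour from one
    equirectangular 360 frame (H x W x 3, linear RGB).
    Illustrative only; MR360's extraction is more sophisticated."""
    h, w, _ = frame.shape
    small = frame[::max(h // 16, 1), ::max(w // 32, 1), :]  # coarse radiance map
    lum = small @ np.array([0.2126, 0.7152, 0.0722])        # per-texel luminance
    iy, ix = np.unravel_index(np.argmax(lum), lum.shape)
    sh, sw = lum.shape
    theta = (iy + 0.5) / sh * np.pi        # polar angle (top row = up)
    phi = (ix + 0.5) / sw * 2.0 * np.pi    # azimuth (left column = 0)
    direction = np.array([np.sin(theta) * np.cos(phi),
                          np.cos(theta),
                          np.sin(theta) * np.sin(phi)])
    return direction, small[iy, ix]        # light direction and its RGB colour
```

The returned direction and colour could then drive an ordinary directional light in the game engine, while the full frame doubles as the environment map for reflections.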


Author(s)/Speaker(s) Bio:
Taehyun James “TJ” Rhee is a professor at Victoria University of Wellington, New Zealand, deputy director of the university’s Computational Media Innovation Centre (CMIC), a new institute for research and commercialisation, and a founder of the mixed reality startup DreamFlux. He previously spent 17 years at Samsung as a principal researcher, leading computer graphics and mixed reality research.

Ian Loh is lead designer at CMIC and DreamFlux.

Lohit Petikam is a computer graphics PhD student at Victoria University of Wellington. He previously worked at 8i and interned at OLM Digital and Weta Digital.

Ben is a developer at CMIC and DreamFlux.

Andrew Chalmers is a postdoctoral researcher at CMIC.

Date: Friday, December 7th
Time: 4:15pm - 4:30pm
Venue: Hall C (4F, C Block)


Summary: When entering a virtual world, users expect an experience that feels natural. Huge progress has been made with regard to motion, vision, and physical interactivity, whereas interaction with non-playable characters lags behind. This live demo introduces a method that leads to more aware, expressive, and lively agents that can attend to their own needs and interact with the player. Notably, the demo covers the use of an emotional component and the addition of a communication layer (speech) to enable more immersive and interactive AI characters in VR.
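
One way to picture the idea (a toy sketch of our own, not the presenter's actual system) is an agent whose needs and the player's speech both feed a simple emotional state, which in turn selects its behaviour:

```python
class EmotionalAgent:
    """Toy NPC: unmet needs and player speech drive an emotional state,
    and the state selects a behaviour. Illustrative only."""

    def __init__(self):
        self.mood = 0.0    # -1 (distressed) .. +1 (happy)
        self.hunger = 0.0  # a need that grows over time

    def update(self, dt):
        self.hunger = min(1.0, self.hunger + 0.1 * dt)
        self.mood = max(-1.0, self.mood - 0.2 * self.hunger * dt)  # unmet needs sour the mood

    def hear(self, utterance):
        # Crude stand-in for the speech/communication layer.
        if "hello" in utterance.lower():
            self.mood = min(1.0, self.mood + 0.3)

    def act(self):
        if self.hunger > 0.7:
            return "search for food"  # the agent answers its own needs
        return "greet player" if self.mood > 0.2 else "wander idly"
```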


Author(s)/Speaker(s) Bio:
Fond of technology, Gautier Boeda did an internship at Force Field VR, working on the VR game "TERM1NAL", before graduating with a Master's degree in Computer Science. He was selected to participate in Vulcanus in Japan, a cooperative programme between Europe and Japan, through which he did an internship at SQUARE ENIX CO., LTD. as an AI engineer. He later joined the company to work further on character interaction and VR. Outside of work, he enjoys participating in game jams.


Date: Friday, December 7th
Time: 4:30pm - 4:45pm
Venue: Hall C (4F, C Block)


Summary: We perform live replay movie creation using recorded play data from Gran Turismo. Our real-time technologies enable movie editing, such as authoring camerawork and adding visual effects, while reproducing the race scene from the play data with high-quality graphics. We also demonstrate some recent developments aimed at the future.
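
The core idea, reproducing a race purely from its play data, can be sketched as deterministic re-simulation (hypothetical function names, not Polyphony Digital's actual system): replaying the same inputs through the same simulation regenerates the scene, which can then be re-rendered with entirely new camerawork.

```python
def simulate(state, inp):
    """One deterministic simulation step (stand-in for the game's physics)."""
    return {"t": state["t"] + 1, "speed": state["speed"] + inp["throttle"]}

def replay(initial_state, recorded_inputs, camera):
    """Re-run the simulation from logged inputs and render each frame with
    an arbitrary camera, so camerawork can be authored after the race."""
    state, frames = initial_state, []
    for inp in recorded_inputs:
        state = simulate(state, inp)
        frames.append(camera(state))
    return frames
```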


Author(s)/Speaker(s) Bio:
Masamichi Sugihara is a graphics programmer at Polyphony Digital Inc., developing graphics systems for Gran Turismo. Prior to joining Polyphony Digital Inc., he worked on rendering techniques and programming models for real-time graphics at Intel. He received his M.Sc. from the University of Victoria, where his research topic was implicit surface modeling.

Hiroki Kashiwagi is a 3D artist working on movie making and technical R&D at Polyphony Digital Inc. He started his career as a design visualizer at Nissan Design. After 12 years in the car design industry, he moved to the game industry. He loves vehicle design and is interested in visualizing beautiful industrial surfaces.

Tatsuya Matsue is a 3D artist leading the movie creation team at Polyphony Digital Inc. After working on movie creation at Namco, he joined Polyphony Digital Inc. in 2000, and since then he has been making movies for the Gran Turismo series as well as developing techniques for movie making in the real-time domain.

Date: Friday, December 7th
Time: 4:45pm - 5:00pm
Venue: Hall C (4F, C Block)


Summary: We will demonstrate how a lifelike 3D avatar can be instantly built from a single selfie input image, using our own team members as well as a volunteer from the audience. We will showcase additional 3D avatars built from internet photographs and highlight the underlying technology, such as our lightweight real-time facial tracking system. We will then show how our automated rigging system enables facial performance capture as well as full-body integration. Finally, we will showcase different body customization features and other digital assets, and show various immersive applications, such as 3D selfie themes and multi-player games, all running on an iPhone.
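
Facial performance capture of this kind is commonly built on blendshape animation; here is a minimal sketch (our own illustration, not Pinscreen's implementation) in which expression weights from a face tracker deform the rigged avatar mesh:

```python
import numpy as np

def apply_blendshapes(neutral, deltas, weights):
    """Standard blendshape deformation: the neutral mesh plus a weighted
    sum of per-expression vertex offsets.

    neutral: (V, 3) vertex positions of the rest pose
    deltas:  (K, V, 3) offsets for K tracked expressions
    weights: (K,) coefficients streamed from the face tracker
    """
    return neutral + np.tensordot(weights, deltas, axes=1)
```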


Author(s)/Speaker(s) Bio:
Dr. Koki Nagano is a principal scientist at Pinscreen and a research associate at the USC Institute for Creative Technologies. His work focuses on research and development toward realistic digital avatars, including 3D face capture, facial animation and simulation, holographic displays, and deep learning. His research won the DC Expo 2015 Special Prize, and he was named a Google PhD Fellow in 2016. His research has been featured in various media outlets such as the LA Times, Wall Street Journal, Wired, Gizmodo, and The Verge. He previously worked for Oculus Research and Weta Digital. He received his PhD in 2017 from the Computer Science program at the University of Southern California, advised by Dr. Paul Debevec, and his BE degree in 2012 from the Tokyo Institute of Technology.

Liwen is a fifth-year Ph.D. student in the Department of Computer Science at the University of Southern California, advised by Prof. Hao Li. He obtained his M.S. degree from the University of Southern California in 2014 and his B.S. degree from Zhejiang University in 2012. He works in the field of computer graphics. His research focuses on developing new data-driven techniques and modeling tools for the digitization of highly intricate geometric structures, such as human hair, as well as the acquisition of physical properties from captured data.

Lain is a software engineer with an interest in graphics programming and a background in games from USC.

Date: Friday, December 7th
Time: 5:00pm - 5:15pm
Venue: Hall C (4F, C Block)


Summary: In this presentation we propose and demonstrate the "REALITY" platform and its associated software and hardware components, answering needs in the live entertainment sector and the Virtual YouTuber (VTuber) space. We show how off-the-shelf software and hardware components can be combined to realize a dream in the animation of interactive virtual characters. This presentation is a collaboration between GREE, Inc., Wright Flyer Live Entertainment, Inc. (WFLE), IKINEMA, and StretchSense. "REALITY" is a platform for VTubers, and with it we promote the concept of "Be yourself you want to be" through associated services such as a live entertainment broadcasting service and smartphone applications. The animation industry in Japan is mature, with numerous animation titles released and a vast number of animation fans. Fans do not limit themselves to watching animation and purchasing related goods; many desire to become a "virtual hero" at events or on various platforms in Japan. Virtual character culture is already established through VOCALOID and content such as videos and live streaming, and we believe that virtual talent with a 2D or 3D avatar is now easily accepted. Furthermore, we demonstrate that this concept can now be achieved from home with readily available and affordable solutions and setups. In addition, unlike animation, interactive bi-directional communication is possible: virtual talent responds to comments sent by audiences during live streaming, and Twitter accounts are updated frequently in response, so fans feel fully engaged with their virtual idols. Interactive virtual characters built with current consumer VR technologies allow one to express oneself much more openly via a virtual avatar, without revealing one's appearance. The proposed REALITY platform allows easy creation and animation of different avatars, and the opportunity to have multiple 3D avatars to represent one's personality in virtual space. This makes the new opportunity appealing to a large audience. This presentation demonstrates the full concept and presents the features of the REALITY platform for VTubers and interactive virtual characters in general.


Author(s)/Speaker(s) Bio:
He has been a researcher in VR entertainment systems since 1995. Since 2018, he has served as laboratory director of GREE VR Studio Lab, which promotes new industrial research into VTuber (Virtual YouTuber, virtual persona, virtual artist) enhancement and explores the future of live entertainment.

He was a technical producer at GREE VR Studio, working on "Fishing Star VR" and several VR arcade projects. He now leads the Presence Technology group at Wright Flyer Live Entertainment, Inc., which develops REALITY Studio for VTuber broadcasting systems.

Todd is the Chief Technology Officer and co-founder of StretchSense, a company that specialises in accurate, soft sensors that turn regular clothing into comfortable motion capture suits for creating new user experiences based on precise human movement. Todd holds a PhD in smart artificial muscles from the University of Auckland, and leads the integration of sensor, garment, and software at StretchSense.

Atbin helps companies around the world develop groundbreaking real-time production and animation pipelines for interactive virtual avatars. He graduated from the University for the Creative Arts, UK, and is currently a Technical Artist and real-time production Advocate at IKINEMA.

Alexandre founded IKINEMA in 2008. The technology behind IKINEMA was developed during Alexandre’s research on controlling satellites at the Surrey Space Centre, UK. Previously he worked at Philips Semiconductors as a systems architect. Alexandre holds a PhD from Cardiff University, and he won the Royal Academy of Engineering Entrepreneur’s Award for 2010 for his work on IKINEMA. Alexandre is a Fellow of the Royal Academy of Engineering.

Eiji Araki is a senior vice president at GREE and CEO of Wright Flyer Live Entertainment, Inc. He was appointed a director at GREE in September 2013. During his university years, Eiji co-founded several startups, and after the sale of these businesses he joined GREE as its fourth regular employee in 2005. Eiji led the development of GREE's PC-based and mobile social networking services as well as several social games, including internationalization through Wright Flyer Studios (WFS). Eiji graduated with a degree in environment and information studies from Keio University in 2005.

Date: Friday, December 7th
Time: 5:15pm - 5:30pm
Venue: Hall C (4F, C Block)


Summary: Mimic Productions' CEO, Hermione Mitford, will present a live-streamed demonstration of detailed facial animation in real time, utilizing her photo-real 3D digital double. The presentation will include a speech from Mitford (and her avatar) addressing Mimic's technological approach, as well as the corresponding applications of the technology. A specific focus will be placed on realism and the details of the human face.


Author(s)/Speaker(s) Bio:
Hermione Mitford is the Co-Founder/CEO of the 3D character studio Mimic Productions and the Founder/Art Director of the 3D creative agency Synthetic Studio. During her 5-year tenure at Mimic Productions, Mitford has established, developed, and grown a team at the cutting edge of 3D human development. From artificial intelligence and virtual reality to cosmetics and robotics, Mitford has made a point of embracing emerging 3D markets and, by extension, developing Mimic's technology to suit the ever-changing demands of the industry and stay at the forefront of the field. This flexible and innovative approach has allowed Mimic to accommodate clients such as Warner Brothers, 2K, and Nike, and talent such as Cristiano Ronaldo, Roger Federer, Marina Abramovic, Jeff Koons, and Kanye West.

Date: Friday, December 7th
Time: 5:30pm - 5:45pm
Venue: Hall C (4F, C Block)


Summary: Two of us will be on stage, with two more people located in Paris, and we will demonstrate remote work with real-time feedback. Each of us will be responsible for creating a short sequence. The goal is to show that with a real-time unified collaborative workflow (CUP® workflow) we can produce a rather complex movie in just a few minutes of work. We will demonstrate three key features of our solution live on stage. First, real-time collaborative editing in a unified workflow: we create a 3D animated movie in a single application and modify each other's work with instant feedback. Then we will load the created scenes onto a smartphone that will be used as a virtual camera (with the help of augmented reality). Finally, we will show a version of the movie rendered live in real time with RTX technology on the server.
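
The collaborative-editing idea can be pictured with a minimal sketch (hypothetical message format and names; PocketStudio's actual CUP® workflow is proprietary): each client applies an edit locally for instant feedback and sends it to a relay, which rebroadcasts it so every participant's copy of the scene converges.

```python
import json

def send_edit(sock, scene, obj_id, prop, value):
    """Apply an edit locally for instant feedback, then forward it
    to the relay server as one newline-delimited JSON message."""
    scene.setdefault(obj_id, {})[prop] = value
    msg = {"object": obj_id, "property": prop, "value": value}
    sock.sendall((json.dumps(msg) + "\n").encode())

def apply_remote_edits(scene, stream):
    """Apply edits rebroadcast by the relay so all clients converge
    on the same scene state."""
    for line in stream:
        edit = json.loads(line)
        scene.setdefault(edit["object"], {})[edit["property"]] = edit["value"]
```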


Author(s)/Speaker(s) Bio:
Jean-Colas has over 25 years of experience in the animation industry and digital special effects. Over the past 3 years Jean-Colas has been working on the development of real-time software for the production of movie content. He is the founder of PocketStudio.

Prior to joining PocketStudio, Stephane worked on large C++ codebases, such as the CGAL 3D mesh generation package and Lumiscaphe's software suite for photorealistic real-time rendering of CAD data.

Date: Friday, December 7th
Time: 5:45pm - 6:00pm
Venue: Hall C (4F, C Block)


Summary: High-quality real-time CG character animation by BANDAI NAMCO Studios Inc., combining the latest motion capture technology with a game engine.

