
Cyrus Vachha

Hi, I'm a second-year PhD student at Princeton University researching VR/AR, HCI, and computer graphics, advised by Parastoo Abtahi. I completed my undergrad and master's at UC Berkeley, where I worked on projects related to VR/AR and interfaces for generative models and neural rendering, advised by Björn Hartmann. In the past, I've contributed to open-source projects including Nerfstudio, and I've interned at Microsoft Research.

Email  /  CV  /  LinkedIn  /  Twitter  /  YouTube

Research

I'm interested in virtual and augmented reality, human-computer interaction, computer graphics, and computer vision. Most of my work sits at the intersection of graphics and HCI, including creativity tools. More recently, I've been working on interfaces and systems that combine radiance fields and generative models for world building, and on learning and perceiving real-world spaces through 3D scene understanding for personalized, situated world modeling.

Dreamcrafter: Immersive Editing of 3D Radiance Fields Through Flexible, Generative Inputs and Outputs
Cyrus Vachha, Yixiao Kang, Zachary Dive, Ashwat Chidambaram, Anik Gupta, Eunice Jun, Björn Hartmann
CHI, 2025 (Paper); UIST, 2024 (Poster)

A VR-based world-building tool for editing radiance field scenes with generative AI in an immersive, real-time interface. Its modular architecture combines direct manipulation with different levels of abstraction and controllability, interfacing with generative image and 3D models, and offers workflows for creating and modifying 3D Gaussian Splatting scenes and for staging scenes for video generative models in real time using proxy representations.
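To make the proxy-representation idea concrete, here is a hypothetical sketch (my illustration, not Dreamcrafter's actual API): an object is placed and transformed by direct manipulation immediately, and resolved into a concrete asset by a generative backend later.

```python
# Hypothetical sketch (not Dreamcrafter's actual API): a proxy object staged
# via direct manipulation in VR, later resolved into a concrete asset by a
# generative image/3D backend.
from dataclasses import dataclass
from typing import Any, Callable, Optional

@dataclass
class ProxyObject:
    prompt: str                          # text description driving generation
    position: tuple = (0.0, 0.0, 0.0)    # set by direct manipulation
    scale: float = 1.0
    asset: Optional[Any] = None          # filled in once generation finishes

    def resolve(self, generate: Callable[[str], Any]) -> None:
        # `generate` stands in for any text-to-3D or image-model backend.
        self.asset = generate(self.prompt)

# Usage: stage a placeholder now, swap in the generated asset when ready.
couch = ProxyObject(prompt="a red velvet couch")
couch.resolve(lambda p: f"<mesh for '{p}'>")
```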

Instruct-GS2GS: Editing 3D Gaussian Splatting Scenes with Instructions
Cyrus Vachha, Ayaan Haque
2024

We propose a method for editing 3DGS scenes with text instructions, similar in approach to Instruct-NeRF2NeRF. Given a 3DGS reconstruction of a scene and the collection of images used to reconstruct it, our method uses an image-conditioned diffusion model (InstructPix2Pix) to iteratively edit the input images while optimizing the underlying scene, resulting in an optimized 3D scene that respects the edit instruction. We demonstrate that our method accomplishes more realistic, targeted edits than prior work.
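A simplified sketch of the iterative dataset-update loop (my paraphrase of the approach, not the released implementation; `render`, `ip2p_edit`, and `train_step` are hypothetical stand-ins for a differentiable 3DGS renderer, an InstructPix2Pix call, and one optimizer step, and the default iteration counts are illustrative):

```python
import random

def instruct_gs2gs(scene, cams, images, instruction,
                   render, ip2p_edit, train_step,
                   iters=30_000, edit_every=2_500):
    dataset = list(images)                 # working copy of the training views
    for step in range(iters):
        if step % edit_every == 0:
            # Periodically re-edit every training view, conditioning the
            # diffusion model on the current render and the original capture.
            for i, cam in enumerate(cams):
                dataset[i] = ip2p_edit(instruction,
                                       render(scene, cam),  # current state
                                       images[i])           # original image
        # Standard 3DGS optimization step against the (edited) dataset.
        i = random.randrange(len(cams))
        train_step(scene, cams[i], dataset[i])
    return scene
```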

Creating Visual Effects with Neural Radiance Fields
Cyrus Vachha
arXiv 2023 (abstract)

We present a pipeline for integrating NeRFs into traditional compositing VFX workflows using Nerfstudio, an open-source framework for training and rendering NeRFs. Our approach uses Blender to align camera paths and composite NeRF renders with meshes and other NeRFs. Shown in the CVPR 2023 Art Gallery. View more NeRF VFX renders here.
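As an illustration of the camera-alignment step, a minimal sketch (not the released add-on) that dumps a Blender camera's per-frame camera-to-world matrices from `bpy`, so they can be converted into a Nerfstudio camera path; the JSON layout here is a simplified assumption:

```python
# Minimal sketch (not the released Blender VFX add-on): export a Blender
# camera's animated pose per frame. The output JSON schema is an assumption,
# not Nerfstudio's exact camera-path format.
import json
import bpy

def export_camera_path(camera_name="Camera", out_path="camera_path.json"):
    scene = bpy.context.scene
    cam = bpy.data.objects[camera_name]
    frames = []
    for f in range(scene.frame_start, scene.frame_end + 1):
        scene.frame_set(f)  # evaluate the animation at frame f
        c2w = [list(row) for row in cam.matrix_world]  # 4x4 camera-to-world
        frames.append({"frame": f, "camera_to_world": c2w})
    with open(out_path, "w") as fp:
        json.dump({"frames": frames}, fp, indent=2)

export_camera_path()
```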

StreamFunnel: Facilitating Communication Between a VR Streamer and Many Spectators
Haohua Lyu*, Cyrus Vachha*, Qianyi Chen*, Balasaravanan Thoravi Kumaravel, Björn Hartmann
arXiv, 2023

In this work, we identify problems that arise when a VR user interacts with large groups of spectators. To address this, we introduce an additional user role, the co-host, who mediates communication between the VR user and many spectators. To facilitate this mediation, we present StreamFunnel, which allows the co-host to be part of the VR application's space and interact with it.

WebTransceiVR: Asymmetrical Communication Between Multiple VR and Non-VR Users Online
Haohua Lyu*, Cyrus Vachha*, Qianyi Chen*, Odysseus Pyrinis, Avery Liou, Balasaravanan Thoravi Kumaravel, Björn Hartmann
CHI, 2022 (LBW)

We propose WebTransceiVR, an asymmetric collaboration toolkit which, when integrated into a VR application, allows multiple non-VR users to share the virtual space of the VR user. It allows external users to enter and be part of the VR application's space through standard web browsers on mobile devices and computers. WebTransceiVR also includes a cloud-based streaming solution that enables many passive spectators to view the scene through any of the active cameras. We conduct informal user testing to gain additional insights for future work.

Extra

SplatXR: Unity toolkit for 3D Gaussian Splatting XR Applications
[In progress, expected release early 2026. Contact for earlier access.]

A Unity toolkit for on-device 3D Gaussian Splatting rendering and interaction, designed for AR and VR applications and compatible with standalone devices including Meta Quest 3 and Apple Vision Pro. We introduce mesh-augmented splats, which enable physics and lighting interactions with splats by auto-generating a mesh from the input splat.
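One plausible way to auto-generate such a proxy mesh (an assumption for illustration, not necessarily the toolkit's actual method) is to treat the Gaussian centers as a point cloud and run Poisson surface reconstruction, e.g. with Open3D:

```python
# Illustrative preprocessing sketch (an assumption, not SplatXR's pipeline):
# build a collision/lighting proxy mesh from 3DGS centers by treating them as
# a point cloud and running Poisson surface reconstruction.
import numpy as np
import open3d as o3d

def splat_centers_to_mesh(centers: np.ndarray, depth: int = 8):
    pcd = o3d.geometry.PointCloud()
    pcd.points = o3d.utility.Vector3dVector(centers)
    pcd.estimate_normals()  # Poisson reconstruction requires normals
    mesh, densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(
        pcd, depth=depth)
    # Trim low-density vertices, which tend to be hallucinated surface.
    remove = np.asarray(densities) < np.quantile(np.asarray(densities), 0.05)
    mesh.remove_vertices_by_mask(remove)
    return mesh
```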

University Datasets for 3DGS and NeRFs
[In progress, expected release early 2026. Contact for earlier access.]

Datasets for radiance field training (3DGS and NeRFs) comprising dozens of captures, primarily of buildings at UC Berkeley and Princeton University, including exteriors and interiors. The datasets include videos (iPhone), images (Sony a7iii), and fisheye images (Insta360 X4 and Samsung Gear360). Online VR splat gallery coming soon.

Nerfstudio Contributions

Since January 2023, I have been contributing features to the Nerfstudio system, including the Blender VFX add-on and VR180/omnidirectional (360° VR) video and image render outputs.
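For context on the omnidirectional outputs: an equirectangular 360° render assigns each pixel a ray direction from its longitude/latitude on the unit sphere. A minimal numpy sketch of that mapping (my illustration, not Nerfstudio's actual camera code):

```python
# Minimal sketch (an illustration, not Nerfstudio's implementation): per-pixel
# ray directions for a 360° equirectangular render.
import numpy as np

def equirect_ray_directions(width: int, height: int) -> np.ndarray:
    # Pixel centers -> longitude in [-pi, pi), latitude in [-pi/2, pi/2].
    lon = (np.arange(width) + 0.5) / width * 2 * np.pi - np.pi
    lat = np.pi / 2 - (np.arange(height) + 0.5) / height * np.pi
    lon, lat = np.meshgrid(lon, lat)
    # Spherical -> Cartesian unit vectors (x right, y up, -z forward).
    x = np.cos(lat) * np.sin(lon)
    y = np.sin(lat)
    z = -np.cos(lat) * np.cos(lon)
    return np.stack([x, y, z], axis=-1)  # shape (height, width, 3)
```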

Additional Projects

Various projects and explorations over the years in VR/AR, 3D, and more.


View Legacy Research Page

Website adapted from Jon Barron's source code