Sensorium Ex
2025, NYC.
A high-profile collaborative opera project featuring nonverbal actors, powered by open-source AI, gesture sensors, and volumetric capture. Led by composer Paola Prestini, with VisionIntoArt, Beth Morrison Projects, and NYU Ability Lab.
My role focused on 3D printing and sensor prototyping, and on assisting with integration of the Evercoast volumetric capture pipeline. The system maps performers' sensor-captured gestures to neural vocoder controls for expressive voice generation, making nonverbal performance accessible.
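To give a flavor of the gesture-to-voice idea, here is a minimal sketch assuming normalized wearable-sensor readings (bend, tilt, speed); all names and ranges are illustrative, and a plain sine tone stands in for the actual neural vocoder, so this is not the production Sensorium Ex pipeline.

```python
# Hypothetical sketch: map gesture-sensor readings to voice-synthesis controls.
# GestureFrame, map_gesture_to_controls, and the sensor ranges are assumptions
# for illustration only; the sine tone is a stand-in for a neural vocoder.
from dataclasses import dataclass
import numpy as np

SAMPLE_RATE = 22_050  # Hz

@dataclass
class GestureFrame:
    """One reading from a wearable sensor, normalized to the 0..1 range."""
    bend: float   # finger/arm flexion
    tilt: float   # hand orientation
    speed: float  # motion energy

def map_gesture_to_controls(frame: GestureFrame) -> dict:
    """Translate a gesture frame into prosody-like controls."""
    return {
        "f0_hz": 110.0 + 220.0 * frame.bend,   # pitch rises with flexion
        "gain": 0.2 + 0.8 * frame.speed,       # louder with faster motion
        "duration_s": 0.2 + 0.3 * frame.tilt,  # tilt stretches the phrase
    }

def render_placeholder_audio(controls: dict) -> np.ndarray:
    """Stand-in for the vocoder: a sine tone driven by the same controls."""
    n = int(SAMPLE_RATE * controls["duration_s"])
    t = np.linspace(0, controls["duration_s"], n, endpoint=False)
    return controls["gain"] * np.sin(2 * np.pi * controls["f0_hz"] * t)

if __name__ == "__main__":
    frame = GestureFrame(bend=0.6, tilt=0.4, speed=0.9)  # mock sensor reading
    audio = render_placeholder_audio(map_gesture_to_controls(frame))
    print(f"Rendered {audio.size} samples at {SAMPLE_RATE} Hz")
```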
The NPR feature “With help from AI, nonverbal actors star in ‘Sensorium Ex’ opera” highlights the project’s public reach and its mission of inclusive technology.
🔗 See more testing on Instagram