Zoe Ziyun Xiao


Sophia - Expressive Robotic Motion Systems


Constraint-aware expressive behavior design for social robotics.



Project Overview: 

This work explores how micro-movements and timing variations can communicate internal cognitive states while remaining aligned with hardware constraints and real-time execution environments.
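For a concrete sense of what constraint-aware means here, the following is a minimal Python sketch, not Sophia's actual control code: a low-amplitude micro-offset is layered onto a base pose, and each command is rate-limited so the actuators never exceed an assumed velocity budget (the loop rate, limits, and function names are all illustrative).

```python
import numpy as np

DT = 1.0 / 50.0   # assumed 50 Hz command loop
MAX_VEL = 0.5     # assumed actuator velocity limit, rad/s

def micro_movement(base_pose, t, amp=0.01, freq=0.3):
    """Layer a slow, low-amplitude sinusoidal offset onto a base pose.

    A per-joint phase shift keeps joints from moving in lockstep,
    which reads as more lifelike than synchronized oscillation.
    """
    phase = np.arange(len(base_pose))
    return base_pose + amp * np.sin(2 * np.pi * freq * t + phase)

def rate_limit(prev_cmd, target):
    """Clamp the per-tick change so joint velocity stays within MAX_VEL."""
    step = np.clip(target - prev_cmd, -MAX_VEL * DT, MAX_VEL * DT)
    return prev_cmd + step
```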



Expressive Processing States


Reactive Emotional Responses
• Rapid Aversion Reflex
• Layered Rejection Response

Cognitive Processing States
• Evaluative Micro-Acknowledgment
• Reflective Consideration State

Physiological State Transitions
• Low-Energy Idle Drift
• Activation Recovery Transition

Motion Stability & Constraint Correction

Refining motion capture data for actuator-safe robotic execution.

Raw motion capture streams frequently contain signal noise, rotational discontinuities, and joint ranges incompatible with the robot's actuators.

This refinement study focuses on:

• Stabilizing micro-jitter
• Resolving joint flipping
• Recalibrating rotation hierarchies
• Aligning motion output with hardware-safe execution thresholds

The refined data preserves kinematic continuity and is ready for deployment on downstream hardware.
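To make the flip-resolution and jitter-smoothing steps above concrete, here is a minimal Python sketch under simplified assumptions (per-joint quaternion tracks and scalar joint angles; the function names and filter constants are illustrative, not the production pipeline):

```python
import numpy as np

def fix_quaternion_flips(quats):
    """Resolve joint flipping by enforcing hemisphere continuity.

    quats: (N, 4) array of per-frame unit quaternions for one joint.
    q and -q encode the same rotation, so a negative dot product with
    the previous frame signals a sign flip; negating keeps interpolation
    from taking the long way around.
    """
    out = quats.copy()
    for i in range(1, len(out)):
        if np.dot(out[i], out[i - 1]) < 0:
            out[i] = -out[i]
    return out

def smooth_and_clamp(angles, limits, alpha=0.2):
    """Suppress micro-jitter with a one-pole low-pass filter, then clamp
    the track into the joint's hardware-safe range.

    angles: (N,) raw joint angles in radians.
    limits: (lo, hi) actuator-safe execution thresholds.
    """
    lo, hi = limits
    smoothed = np.empty_like(angles, dtype=float)
    smoothed[0] = angles[0]
    for i in range(1, len(angles)):
        smoothed[i] = alpha * angles[i] + (1 - alpha) * smoothed[i - 1]
    return np.clip(smoothed, lo, hi)
```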


AI-driven Facial Prototyping

Rapid behavior iteration within virtual simulation environments prior to physical deployment.

Overview:
This exploration investigates AI-assisted facial animation workflows using NVIDIA Omniverse and Audio2Face to accelerate expressive behavior prototyping in a robotics pipeline.


Core Focus:

Audio-to-Facial Mapping
Neural audio encoders and blendshape solvers translating speech signals into controllable facial parameters.
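As a rough sketch of this mapping stage, assuming the solver emits normalized blendshape weights per frame: the channel names, gain matrix, and clamp below are hypothetical calibration values, not Audio2Face's actual output schema.

```python
import numpy as np

# Hypothetical channels; a calibrated robot would define its own schema.
BLENDSHAPES = ["jawOpen", "mouthSmileL", "mouthSmileR", "browInnerUp"]
ACTUATORS   = ["jaw_motor", "lip_corner_l", "lip_corner_r", "brow_motor"]

# Linear map from normalized blendshape weights (0..1) to actuator
# command range, calibrated per robot in practice.
GAIN = np.diag([0.8, 0.6, 0.6, 0.5])

def weights_to_commands(weights, ceiling=0.7):
    """Convert one frame of blendshape weights into clamped actuator commands."""
    cmds = GAIN @ np.asarray(weights, dtype=float)
    # Clamp to an actuator-safe ceiling before streaming to hardware.
    return np.clip(cmds, 0.0, ceiling)
```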

USD Iterative Workflow
Layer-based scene composition and non-destructive variant switching within Omniverse’s USD ecosystem.
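A minimal sketch of what that non-destructive switching looks like in USD's Python API (the stage path, prim path, and variant names are hypothetical):

```python
from pxr import Usd, Sdf

# Hypothetical stage and prim; a real pipeline would open an existing asset.
stage = Usd.Stage.CreateNew("face_prototypes.usda")
face = stage.DefinePrim("/Sophia/Face", "Xform")

# A variant set lets several expression setups live in one scene; switching
# the selection swaps the authored opinions without destroying the others.
vset = face.GetVariantSets().AddVariantSet("expression")
for name in ("neutral", "calm", "attentive"):
    vset.AddVariant(name)

vset.SetVariantSelection("calm")
with vset.GetVariantEditContext():
    # Opinions authored here apply only while "calm" is selected.
    face.CreateAttribute("intensity", Sdf.ValueTypeNames.Float).Set(0.4)

stage.GetRootLayer().Save()
```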

Deployment Readiness Assessment
Evaluation of motion stability, expressivity control, and actuator-safe behavior prior to hardware testing.
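A toy version of such a pre-deployment check, assuming a sampled joint-angle track; the jerk budget and thresholds are placeholders for per-actuator specs:

```python
import numpy as np

def readiness_report(angles, limits, dt=0.02, jerk_budget=50.0):
    """Check one joint-angle track for range violations and peak jerk.

    angles: (N,) joint angles sampled at interval dt (seconds).
    limits: (lo, hi) actuator-safe range for this joint.
    """
    lo, hi = limits
    violations = int(np.sum((angles < lo) | (angles > hi)))
    vel = np.gradient(angles, dt)
    acc = np.gradient(vel, dt)
    jerk = np.gradient(acc, dt)
    peak_jerk = float(np.max(np.abs(jerk)))
    return {
        "range_violations": violations,
        "peak_jerk": peak_jerk,
        "passes": violations == 0 and peak_jerk < jerk_budget,
    }
```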

Immersive Virtual Environment Integration

AI-generated virtual environments composited with alpha-channel facial animation to prototype meditative interaction contexts.

Overview:
Ambient therapeutic environments (a bamboo forest, an interior meditation space) were generated with generative video tools and composited with facial animation outputs to simulate guided meditation sessions.

Core Components:

Generative Environment Prototyping
Rapid atmospheric scene generation for behavioral context testing.

AI-Assisted Sound Design
Procedurally composed ambient soundscapes aligned with emotional pacing.

Alpha-based Compositing Pipeline
Integrated transparent facial animation renders into dynamic audiovisual environments.
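The compositing step itself reduces to the standard "over" operator; a minimal NumPy sketch, assuming straight (non-premultiplied) alpha and float frames in [0, 1]:

```python
import numpy as np

def alpha_over(fg_rgba, bg_rgb):
    """Composite a transparent face render over an environment frame.

    fg_rgba: (H, W, 4) facial render with straight alpha.
    bg_rgb:  (H, W, 3) generated environment frame.
    """
    alpha = fg_rgba[..., 3:4]
    return fg_rgba[..., :3] * alpha + bg_rgb * (1.0 - alpha)
```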

Emotion-Scene Synchronization
Matched visual rhythm, lighting tone, and audio cadence with facial expressivity.
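One simple way to drive such synchronization, sketched here as an assumption rather than the project's actual method, is to scale facial expressivity by the soundtrack's loudness envelope:

```python
import numpy as np

def audio_envelope(samples, hop=1024):
    """RMS loudness envelope of the ambient soundtrack, one value per hop."""
    n = len(samples) // hop
    frames = samples[: n * hop].reshape(n, hop)
    return np.sqrt(np.mean(frames ** 2, axis=1))

def intensity_curve(env, smoothing=0.9):
    """Exponentially smooth the envelope and normalize it to a 0..1
    expressivity curve, so facial intensity follows the audio's pacing."""
    out = np.empty_like(env, dtype=float)
    acc = 0.0
    for i, e in enumerate(env):
        acc = smoothing * acc + (1 - smoothing) * e
        out[i] = acc
    rng = out.max() - out.min()
    return (out - out.min()) / rng if rng > 0 else np.zeros_like(out)
```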

Public Activation
Live exhibition contexts for validating expressive clarity and gauging audience perception in real-world settings.