OPEN HUMANOID TRAINING // FROM SIMULATION TO REALITY

WE TRAIN HUMANOIDS INSIDE THE MATRIX.

Glitch in the Matrix is a physical AI company that helps teams build humanoid software in simulation first, then transfer it into the real world. We use Glitchy, our open-source semi-humanoid based on OpenArm V2, to teach the full stack: simulation, VLMs, VLAs, World Models, synthetic data, evaluation, and deployment.

Robot: Glitchy

Semi-humanoid based on OpenArm V2, designed for approachable and lower-risk training near people.

Software: Simulation first

Most of the stack can be built without owning the robot, then transferred to hardware later.

Models: VLM + VLA + World Models

Modern physical AI explained in a way operators can actually use.

Community: Open source + events

We connect universities, startups, builders, and researchers around a shared open ecosystem.

WHO WE ARE

We help teams learn the path from simulation to real robots.

Most teams have only fragments of the stack: hardware without a curriculum, AI without embodiment, or simulation without deployment. We connect the whole path, stay close to the open-source community, and make it teachable.

The mission

We help teams train humanoids in simulation, evaluate them safely, and transfer them into the physical world with a process they can actually understand and reuse.

  • Make the software stack usable even before a team owns the robot.
  • Teach the full path from simulation to hardware.
  • Use Glitchy to make the process safe, visible, and repeatable.
  • Turn advanced robotics concepts into a working curriculum and community.

Who it is for

We package the stack for universities, research labs, and startups that need to move fast without rebuilding every lesson from scratch.

We are also deeply connected to the ecosystem itself: organizing events, bringing people together, and helping open-source robotics communities grow.

Universities · Startups · Research labs · Operator teams

From first principles to field deployment: one stack, one language, one operator manual.

MEET GLITCHY

Our open-source semi-humanoid platform.

Glitchy

Based on OpenArm V2

Semi-humanoid with compliant joints, designed for safe interaction around people. Built on the OpenArm open-source ecosystem: open hardware, ROS2 integration, teleoperation, and full simulation support.

  • Type: Semi-humanoid
  • Platform: OpenArm V2
  • Safety: Compliant, human-friendly
  • Stack: Open source, ROS2

FROM SIMULATION TO THE REAL WORLD

A practical sim-to-real pipeline, taught step by step.

01

Capture scenes and build simulation

Build environments, tasks, sensors, and scene representations for high-fidelity simulation.

02

Generate synthetic data

Use scene capture, domain randomization, and controlled simulation variation to create robust training data.
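
A minimal sketch of what this step can look like in practice: per-episode domain randomization with MuJoCo's Python bindings. The inline one-box scene is a toy stand-in for a real Glitchy environment, and the parameter ranges are illustrative.

    import mujoco
    import numpy as np

    # Toy stand-in scene; a real setup would load the OpenArm V2 / Glitchy
    # MJCF description instead.
    SCENE_XML = """
    <mujoco>
      <worldbody>
        <geom name="floor" type="plane" size="2 2 0.1"/>
        <body name="box" pos="0 0 0.3">
          <freejoint/>
          <geom name="box_geom" type="box" size="0.05 0.05 0.05" mass="0.2"/>
        </body>
      </worldbody>
    </mujoco>
    """

    rng = np.random.default_rng(0)
    model = mujoco.MjModel.from_xml_string(SCENE_XML)

    for episode in range(10):
        # Randomize physics per episode so policies cannot overfit to one
        # fixed set of dynamics (illustrative ranges).
        model.geom_friction[model.geom("box_geom").id, 0] = rng.uniform(0.4, 1.2)
        model.body_mass[model.body("box").id] = rng.uniform(0.1, 0.4)
        data = mujoco.MjData(model)
        for _ in range(500):          # one second at the default 2 ms step
            mujoco.mj_step(model, data)
        # ...record observations here as synthetic training data...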

03

Teach perception and action

Use VLMs, VLAs, and policy learning so the humanoid can map intent into movement.
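
The simplest form of policy learning is behavior cloning: regress expert actions from observations. A minimal PyTorch sketch; the feature and action dimensions here are made up, and a real VLA conditions on images and language with a far larger model.

    import torch
    import torch.nn as nn

    OBS_DIM, ACT_DIM = 64, 7   # stand-in feature size and per-arm joint count

    policy = nn.Sequential(
        nn.Linear(OBS_DIM, 128), nn.ReLU(),
        nn.Linear(128, 128), nn.ReLU(),
        nn.Linear(128, ACT_DIM),
    )
    optimizer = torch.optim.Adam(policy.parameters(), lr=1e-3)

    # Stand-in demonstrations; in practice these come from teleoperation
    # or the synthetic data generated in the previous step.
    obs = torch.randn(256, OBS_DIM)
    expert_actions = torch.randn(256, ACT_DIM)

    for step in range(100):
        loss = nn.functional.mse_loss(policy(obs), expert_actions)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()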

04

Model the future

Use World Models to predict outcomes, compare futures, and improve data efficiency.
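
One concrete use of a World Model is planning by imagination: roll out many candidate action sequences inside the model, score the predicted outcomes, and execute only the best first action. A sketch with a stand-in linear dynamics function where a learned network would normally sit:

    import numpy as np

    rng = np.random.default_rng(0)

    def predict_next(state, action):
        # Stand-in for a learned world model; in practice a network trained
        # on logged transitions predicts the next (often latent) state.
        return state + 0.1 * action

    def plan(state, goal, horizon=10, candidates=64):
        best_score, best_action = -np.inf, None
        for _ in range(candidates):
            s = state.copy()
            actions = rng.uniform(-1, 1, size=(horizon, state.shape[0]))
            for a in actions:
                s = predict_next(s, a)          # imagine the future
            score = -np.linalg.norm(s - goal)   # closer to the goal is better
            if score > best_score:
                best_score, best_action = score, actions[0]
        return best_action

    state, goal = np.zeros(3), np.array([1.0, 0.0, 0.5])
    print(plan(state, goal))   # first action of the best imagined future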

05

Exit into reality

Validate safety, close the remaining sim-to-real gap, and transfer to real hardware with confidence.
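
Part of validating safety is mechanical: no command reaches hardware without passing a gate. A sketch of the idea; the joint and per-tick step limits here are hypothetical, and real values come from the OpenArm V2 specification and your own risk assessment.

    import numpy as np

    # Hypothetical limits for illustration only.
    JOINT_LIMITS = np.deg2rad([170, 120, 170, 120, 170, 120, 170])
    MAX_STEP = np.deg2rad(2.0)   # max change per control tick

    def safety_gate(current, commanded):
        """Clamp a policy command before it ever reaches real hardware."""
        commanded = np.clip(commanded, -JOINT_LIMITS, JOINT_LIMITS)
        delta = np.clip(commanded - current, -MAX_STEP, MAX_STEP)
        return current + delta

    current = np.zeros(7)
    raw = np.deg2rad([90.0, -200.0, 10.0, 5.0, 0.0, 0.0, 400.0])  # misbehaving policy
    print(safety_gate(current, raw))   # clamped, rate-limited command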

THE STACK

We explain the models and tools behind modern physical AI.

Each layer is explained in plain language, then connected to the exact tools used in modern robotics training.

VLMs

Vision-Language Models

These models turn images and instructions into grounded understanding of scenes, objects, and intent.

VLAs

Vision-Language-Action

These models extend perception into action so the robot can convert goals, visual context, and language into motor behavior.
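
However large the model, the contract stays simple: image plus instruction in, motor command out. A hypothetical interface sketch; the class and its stub action are illustrative, not a real VLA.

    from dataclasses import dataclass
    import numpy as np

    @dataclass
    class Observation:
        rgb: np.ndarray      # camera image, H x W x 3
        instruction: str     # natural-language goal

    class VisionLanguageActionPolicy:
        """Hypothetical interface; a real VLA is a large pretrained network,
        but only this contract matters to the rest of the stack."""
        def act(self, obs: Observation) -> np.ndarray:
            # A real model fuses image and language and decodes motor
            # commands; this stub just holds still on 7 joints.
            return np.zeros(7)

    policy = VisionLanguageActionPolicy()
    obs = Observation(rgb=np.zeros((224, 224, 3), dtype=np.uint8),
                      instruction="hand me the screwdriver")
    action = policy.act(obs)   # joint targets for the next control step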

World Models

Predictive internal simulators

These models help the robot imagine future trajectories, compare outcomes, and plan with fewer real-world failures.

Simulation, scene capture, and deployment tools

We focus on the core tools that matter in practice: Isaac Sim, Genesis, MuJoCo, and ROS2. Around that stack, we explain how scene capture, NeRFs, Gaussian splatting, and domain randomization fit into synthetic data generation and sim-to-real transfer.

Isaac Sim · Genesis · MuJoCo · ROS2
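
On the deployment side, ROS2 is the common bus. A minimal rclpy node that publishes joint targets at 50 Hz; the topic name and joint names are illustrative, not Glitchy's actual interface.

    import rclpy
    from rclpy.node import Node
    from sensor_msgs.msg import JointState

    class JointCommandPublisher(Node):
        def __init__(self):
            super().__init__('joint_command_publisher')
            # Illustrative topic; a real robot defines its own interfaces.
            self.pub = self.create_publisher(JointState, 'joint_commands', 10)
            self.create_timer(0.02, self.tick)   # 50 Hz control tick

        def tick(self):
            msg = JointState()
            msg.header.stamp = self.get_clock().now().to_msg()
            msg.name = [f'joint_{i}' for i in range(7)]
            msg.position = [0.0] * 7             # hold a neutral pose
            self.pub.publish(msg)

    def main():
        rclpy.init()
        rclpy.spin(JointCommandPublisher())

    if __name__ == '__main__':
        main()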

CURRICULUM

A complete learning path for universities and startups.

Curriculum modules

01
Physical AI foundations

Frames, sensors, datasets, control loops, and evaluation baselines.

02
Simulation, scene capture, and synthetic data

Scene setup, NeRFs, Gaussian splatting, task design, and domain randomization.

03
VLMs, VLAs, and World Models

How modern model families fit together in real robotics systems.

04
Training and evaluation

Policy learning, benchmarking, failure analysis, and safety gates.

05
Sim-to-real transfer

Deployment, calibration, supervision, and post-transfer iteration.

What teams leave with

  • A shared language for physical AI.
  • A software-first workflow that does not require owning the robot on day one.
  • A usable roadmap from simulation to hardware.
  • Hands-on understanding of the modern robotics model stack.
  • A repeatable curriculum that can live inside a university or startup.

The goal is not to watch robots on stage. The goal is to train operators who can ship them.

WHY TEAMS WORK WITH US

Open robot, open pipeline, and a practical way to learn.

Open by default

We stay close to open-source robotics and expose the full training loop, from simulation and data generation to policy evaluation and deployment.

Simulation first

Teams can start building most of the stack in simulation. The robot becomes the next step, not the entry ticket.

Community and ecosystem

We organize events, connect startups and researchers, and help grow a more collaborative ecosystem around open-source humanoid robotics.