Cyberwave Research

Where embodied AI meets production reality

Cyberwave Research builds the science that turns frontier physical AI — robot foundation models, vision-language-action (VLA) policies, multi-robot orchestration, shared autonomy — into systems you can actually run on a factory floor, a logistics yard, an inspection route, a hospital, a power plant, or a forward defense site.

Our research thesis

Embodied intelligence is having its GPT moment. Foundation models can reason about physical tasks; VLA policies can act on them. The bottleneck has moved.

The frontier isn't just bigger models. It's the scaffolding around them — twins, fleets, safety envelopes, teleop pipelines, deterministic runtimes — that decides whether a clever policy becomes a clever demo or a system that runs 24/7 across factories, depots, ports, hospitals, and defense sites.

Cyberwave Research lives at that interface. We work on the models and we work on the production stack underneath them — because in physical AI you can't separate the two.

Research focus areas

Nine workstreams, one stack. Each one ships into the platform.

Embodied intelligence

Robot Foundation Models (RFM)

We train and fine-tune large embodied policies that generalize across morphologies — arms, mobile bases, drones, humanoids — and across tasks. Our recipe blends imitation from teleoperation, RL in simulation, and continual learning from production telemetry, with safety constraints baked into the loss.
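What "safety constraints baked into the loss" means can be sketched in a few lines. This is a toy illustration under assumed conventions, not our training objective: the function name `safe_imitation_loss`, the hinge penalty, and the symmetric joint-limit convention are all hypothetical.

```python
import numpy as np

def safe_imitation_loss(pred, demo, joint_limits, safety_weight=10.0):
    """Illustrative composite loss (hypothetical, not production code):
    behavior-cloning error plus a hinge penalty on joint-limit violations."""
    pred = np.asarray(pred, dtype=float)
    demo = np.asarray(demo, dtype=float)
    # Imitation term: mean squared error against the teleop demonstration.
    imitation = np.mean((pred - demo) ** 2)
    # Safety term: squared hinge on the margin past each (symmetric) joint limit.
    violation = np.maximum(np.abs(pred) - np.asarray(joint_limits, dtype=float), 0.0)
    safety = np.mean(violation ** 2)
    return imitation + safety_weight * safety
```

Actions that match the demonstration and stay inside the limits pay nothing; actions that exceed a limit pay quadratically, scaled by `safety_weight`.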

Multimodal control

Vision-Language-Action (VLA) Policies

VLA models that turn natural-language goals like “inspect aisle seven” or “pick the damaged crate on pallet B” into grounded, low-latency action streams. We focus on the hard parts: spatial grounding, long-horizon planning, recovery behaviors, and predictable performance under degraded sensing.
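A real VLA model maps pixels and language to actions end to end, but the grounding step can be caricatured as a lookup from verb to action primitive. Everything here is hypothetical — the `PRIMITIVES` table, the primitive names, and the parsing are a toy, not how a learned policy works:

```python
import re

# Toy grounding table (hypothetical): verb -> action primitive.
PRIMITIVES = {
    "inspect": "NAVIGATE_AND_SCAN",
    "pick": "GRASP",
}

def ground_goal(utterance):
    """Map a natural-language goal to (primitive, target), or None if the
    verb is unknown or the utterance has no target phrase."""
    match = re.match(r"(\w+)\s+(.*)", utterance.strip().lower())
    if not match:
        return None
    verb, target = match.groups()
    primitive = PRIMITIVES.get(verb)
    return (primitive, target) if primitive else None
```

The hard parts named above — spatial grounding, long horizons, recovery — are exactly what this lookup cannot do, which is why the learned policy exists.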

Fleet intelligence

Multi-Robot Orchestration

Heterogeneous fleets working together — UGVs, manipulators, drones, humans — across labs, factories, ports, and outdoor sites. We research scheduling, conflict resolution, shared perception, and resilience patterns that hold up when one node degrades or drops out.
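One of the resilience patterns above — routing tasks around a degraded node — can be shown with a minimal greedy assignment. A sketch only, under assumed data shapes; production scheduling is far richer than nearest-robot-wins:

```python
def assign_tasks(robots, tasks):
    """Greedy distance-based task assignment that skips degraded robots.
    robots: {name: {"pos": (x, y), "healthy": bool}}; tasks: {name: (x, y)}.
    Returns {task: robot}; tasks left over when robots run out are omitted.
    Illustrative only (hypothetical schema)."""
    available = {n: r for n, r in robots.items() if r["healthy"]}
    assignment = {}
    for task, (tx, ty) in tasks.items():
        if not available:
            break
        # Closest healthy, still-unassigned robot by squared distance.
        best = min(available, key=lambda n: (available[n]["pos"][0] - tx) ** 2
                                          + (available[n]["pos"][1] - ty) ** 2)
        assignment[task] = best
        del available[best]
    return assignment
```

Because unhealthy robots are filtered out before assignment, a node dropping out simply shrinks the pool rather than stalling the fleet.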

Shared autonomy

Human-Robot Interaction

Operators rarely want full autonomy or full teleop — they want a dial. We build shared-autonomy interfaces, intent prediction, conversational supervision, and gesture-based hand-offs so a single person can safely supervise dozens of robots without losing situational awareness.
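The "dial" is not a metaphor; at its simplest it is a blend weight between the operator's command and the policy's. A deliberately minimal sketch (real shared autonomy blends in intent space, not raw velocities):

```python
def blend_command(operator_cmd, policy_cmd, autonomy):
    """Shared-autonomy dial: linearly blend operator and policy velocity
    commands. autonomy=0.0 is pure teleop, 1.0 is full autonomy.
    Illustrative only; inputs are same-length velocity tuples."""
    if not 0.0 <= autonomy <= 1.0:
        raise ValueError("autonomy must be in [0, 1]")
    return tuple((1.0 - autonomy) * h + autonomy * p
                 for h, p in zip(operator_cmd, policy_cmd))
```

An operator supervising dozens of robots would leave most dials near 1.0 and pull one toward 0.0 only when stepping in.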

Closing the reality gap

Sim-to-Real & Digital Twins

High-fidelity twins, domain randomization, photoreal sensor models, and hardware-in-the-loop rigs let us promote policies from simulation to production with measurable, statistical confidence — not vibes. Every regression suite runs in sim before any real motor moves.
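Domain randomization, stripped to its core, is seeded jitter over physical parameters so no single simulated world is memorized. The parameter names below are hypothetical and the sampler is a sketch of the idea, not our twin tooling:

```python
import random

def randomize_domain(base, ranges, seed=None):
    """Sample one simulation variant by jittering parameters uniformly
    within per-parameter bounds; parameters without bounds pass through.
    Seeded for reproducible regression suites (illustrative sketch)."""
    rng = random.Random(seed)
    return {k: rng.uniform(*ranges[k]) if k in ranges else v
            for k, v in base.items()}
```

Seeding matters here: the same seed reproduces the same variant, which is what lets a sim regression suite give statistical confidence rather than vibes.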

Real-time systems

Edge Autonomy & Deterministic Compute

Foundation-scale intelligence is useless if the control loop misses its deadline. We research model distillation, adaptive scheduling, mixed-precision inference, and deterministic runtimes so VLA-class policies can run at millisecond cadence on heterogeneous edge hardware.
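The deadline discipline above is the key design constraint: a distilled fast policy always answers in time, and the large model's answer is used only when it arrives inside the budget. A schematic sketch, with hypothetical policy callables and an illustrative 50% budget threshold:

```python
import time

def control_step(fast_policy, slow_policy, obs, deadline_s):
    """One control tick (illustrative): run the distilled fast policy,
    then upgrade to the large policy's action only if it lands in budget."""
    start = time.monotonic()
    action = fast_policy(obs)  # assumed to always meet the deadline
    remaining = deadline_s - (time.monotonic() - start)
    # Only consult the large model when comfortable budget remains.
    if remaining > deadline_s * 0.5:
        refined = slow_policy(obs)
        if time.monotonic() - start <= deadline_s:
            action = refined
    return action
```

The invariant is that the loop always emits *some* action by the deadline; the large model can improve it but never delay it.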

Grounding intelligence in reality

Perception & Sensor Fusion

Vision, LiDAR, depth, thermal, acoustic, IMU, industrial telemetry — fused into a single, queryable world model. Robust under occlusion, weather, glare, vibration, and the messy reality of factory floors and outdoor sites.
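For redundant range estimates — say LiDAR and a depth camera measuring the same obstacle — the textbook fusion rule weights each sensor by its inverse variance. A minimal sketch of that one step, nothing more:

```python
def fuse_ranges(measurements):
    """Inverse-variance weighted fusion of redundant scalar estimates.
    measurements: list of (value, variance) pairs with variance > 0.
    Returns (fused_value, fused_variance). Classic textbook rule, shown
    here as an illustration of one fusion step."""
    weights = [1.0 / var for _, var in measurements]
    fused = sum(w * v for (v, _), w in zip(measurements, weights)) / sum(weights)
    fused_var = 1.0 / sum(weights)
    return fused, fused_var
```

Note the fused variance is always below the best single sensor's, which is the whole point of carrying redundant modalities through glare, occlusion, and vibration.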

Trust as a primitive

Safety, Assurance & Governance

Runtime safety envelopes, formal monitors, override paths in milliseconds, and audit trails by default. We build the verification tooling that lets embodied AI ship into regulated environments — manufacturing, energy, healthcare, defense — and stay there.
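A runtime safety envelope sits between policy and actuator: it clamps commands to certified limits and vetoes motion into keep-out zones, whatever the policy proposes. A two-rule sketch with hypothetical shapes (2-D velocity, axis-aligned keep-out box):

```python
def enforce_envelope(cmd, v_max, keep_out, pos):
    """Runtime safety monitor sketch (hypothetical interface):
    cmd: commanded velocity (vx, vy); v_max: certified speed limit;
    keep_out: ((xmin, ymin), (xmax, ymax)) forbidden box; pos: (x, y)."""
    vx, vy = cmd
    # Rule 1: clamp speed to the certified envelope.
    speed = (vx ** 2 + vy ** 2) ** 0.5
    if speed > v_max:
        scale = v_max / speed
        vx, vy = vx * scale, vy * scale
    # Rule 2: veto (zero) any command whose next position enters the keep-out box.
    nx, ny = pos[0] + vx, pos[1] + vy
    (xmin, ymin), (xmax, ymax) = keep_out
    if xmin <= nx <= xmax and ymin <= ny <= ymax:
        return (0.0, 0.0)
    return (vx, vy)
```

Because the monitor is a few comparisons rather than a neural network, it can run every control tick and be argued about in a safety case, independent of the policy it supervises.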

Closing the data loop

Continual Learning from Fleets

Every deployment generates teleop traces, intervention events, and edge-case telemetry. We research how to safely curate, label, and re-train from this data — turning every robot in production into a contributor to the next model.
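Curation is mostly a filtering problem: interventions and severe edge cases are worth a labeler's time, routine traces are not. The event schema below is hypothetical and the filter is a sketch of the triage step only:

```python
def curate_retraining_set(events, min_severity=0.5):
    """Select fleet telemetry worth labeling (illustrative filter over a
    hypothetical event schema): keep all operator interventions, keep
    edge cases at or above min_severity, drop routine traces."""
    keep = []
    for ev in events:
        if ev.get("type") == "intervention":
            keep.append(ev)
        elif ev.get("type") == "edge_case" and ev.get("severity", 0.0) >= min_severity:
            keep.append(ev)
    return keep
```

Downstream of this filter sit the harder research questions — deduplication, labeling, and deciding when a retrained model is safe to promote.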

What ships out of the lab

Research at Cyberwave is judged by what reaches production — not what reaches a paper.

  • Production-grade VLA policies for inspection, pick-and-place, and navigation, served from the edge with deterministic latency budgets.
  • Open digital twin recipes and sim-to-real regression tooling shared with the ecosystem so others can promote policies safely.
  • Safety cases and runtime monitors aligned with IEC 61508 and ISO 10218 so embodied AI can enter regulated environments and stay there.
  • Shared-autonomy and teleop loops battle-tested across defense, logistics, energy, manufacturing, aerospace, and maritime deployments.
  • Academic collaborations on embodied LLMs, tactile sensing, multi-agent coordination, and continual learning from real fleets.

Bring your hardest physical-AI problem

Whether you're a research lab, a robot maker, or an operator with a stubborn autonomy problem — we want to work on it. We'll meet you in simulation first, then on site.