At MIT Hard Mode 2026, a six-person team built an AI system that can temporarily move your hand for you. Called Human Operator, it combines a vision-language model, voice input, and electrical muscle stimulation – a technology that sends small currents through the skin to contract specific muscles – to physically guide a user’s hand and wrist through unfamiliar movements.

The team describes it as a “human augmentation tool” designed to help people learn or perform actions they couldn’t manage on their own. Rather than displaying instructions or providing feedback after the fact, the system intervenes in the moment, nudging the body directly toward the right motion.

Peter He, Ashley Neall, Valdemar Danry, Daniel Kaijzer, Yutong Wu, and Sean Lewis built the project and took first place in the “Learn Track.” Hard Mode is a 48-hour hackathon at the MIT Media Lab focused on intelligent physical systems that can sense, adapt, and respond to people in real time.

How the Human Operator works

What the team built is essentially a careful assembly of technologies that already existed, just never quite in this combination. A camera captures what the user sees. Voice input runs through Anthropic’s Claude API, which figures out what motion is needed and maps it to a sequence of muscle commands.
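The team hasn’t published a write-up of its prompt or command schema, but the voice-to-motion step described above can be sketched roughly as follows. The channel names, output format, prompt, and model choice here are illustrative assumptions, not the project’s actual code.

```python
# Sketch of turning a spoken request into a timed muscle-stimulation plan.
# Channel names, prompt, output format, and model are assumptions for illustration.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

CHANNELS = ["WRIST_FLEX", "WRIST_EXTEND", "FINGER_CURL", "FINGER_EXTEND"]  # hypothetical

def plan_muscle_commands(spoken_request: str) -> str:
    """Ask the model for a step-by-step stimulation plan for the request."""
    prompt = (
        "You control an EMS sleeve on a user's forearm. "
        f"Available channels: {', '.join(CHANNELS)}. "
        "Reply only with lines of the form 'CHANNEL intensity duration_ms', "
        "one per step, to perform this request: " + spoken_request
    )
    message = client.messages.create(
        model="claude-3-5-sonnet-20241022",  # any vision-language Claude model would do
        max_tokens=300,
        messages=[{"role": "user", "content": prompt}],
    )
    return message.content[0].text  # e.g. "WRIST_EXTEND 40 500\nFINGER_CURL 60 300"

print(plan_muscle_commands("wave hello back"))
```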

Those commands then travel through an Arduino-based hardware stack to EMS electrodes on the wrist and fingers.
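The hand-off to the hardware might look something like the sketch below, with the host streaming each planned step to the Arduino over a serial link. The port, baud rate, and comma-separated message format are assumptions for illustration; the team’s actual protocol may differ.

```python
# Sketch of forwarding a stimulation plan to an Arduino-based EMS driver.
# Serial port, baud rate, and wire format are illustrative assumptions.
import time
import serial  # pyserial

def send_plan(plan: str, port: str = "/dev/ttyACM0") -> None:
    """Stream each stimulation step to the microcontroller, pacing by its duration."""
    with serial.Serial(port, baudrate=115200, timeout=1) as ser:
        time.sleep(2)  # typical Arduino reset delay after the port opens
        for line in plan.strip().splitlines():
            channel, intensity, duration_ms = line.split()
            ser.write(f"{channel},{intensity},{duration_ms}\n".encode())
            time.sleep(int(duration_ms) / 1000)  # wait for the step to finish

send_plan("WRIST_EXTEND 40 500\nFINGER_CURL 60 300")
```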

In the demo footage, a hand waves back at someone, fingers find the right keys for a melody, and a fist curls into an OK sign. Each motion is guided by the system, which reads the situation and decides it is time to move.

The engineering stack reflects a deliberate choice about where AI sits in the interaction. Most consumer AI systems stop at text, voice, or screen output. Human Operator goes a layer deeper, into motion itself.

The system tries to move with the body rather than merely surface instructions, an approach the team frames as helping users “learn and do things you normally cannot do.” Where that leads in terms of physical learning, accessibility, or new kinds of interfaces, however, is an open question.

The human-computer integration

The team wasn’t working in a vacuum. Their repository credits the Human Computer Integration (HCI) Lab at the University of Chicago and draws on prior research into neuromuscular interfaces, electrode placement, and generative muscle stimulation – a body of work that has been quietly building for years at the edges of HCI and embodied AI.

Human Operator lands somewhere in that lineage: a hackathon prototype, yes, but one with serious intellectual roots.

The project may not be a finished product, but as a demonstration of where things could go, it’s hard to ignore. The system moves off the screen and onto the body, which puts it in genuinely unfamiliar territory for consumer AI.