Imagine creating shapes, animating scenes, or telling interactive stories, simply by arranging a group of tabletop robots with your hands. What if you could give voice instructions without a single line of code? With this intuitive, hands-on approach, the process of programming complex behaviors becomes accessible to anyone, regardless of their technical background. Users can physically arrange the robots on a tabletop to form anything from animals to geometric figures and then describe their desired actions using natural language, such as “make the dog wag its tail” or “move the shape in a circle”.

In their newly published paper, Shape n’ Swarm: Hands-on, Shape-aware Generative Authoring for Swarm UI using LLMs, a team from the Actuated Experience Lab (AxLab) led by Assistant Professor Ken Nakagaki at the University of Chicago Department of Computer Science presented a proof-of-concept tool, called Shape n’ Swarm, that accomplishes exactly that. Led by undergraduate student Matthew Jeung ‘26, this work was presented at the 2025 ACM Symposium on User Interface Software and Technology (UIST), a globally recognized conference for advances in interface research, where it won the Best Demo Award.

Shape n’ Swarm was inspired by longstanding visions in Human-Computer Interaction (HCI) and robotics of a responsive, shape-changing material, introduced two decades ago in works like Radical Atoms and Claytronics. These works envisioned a clay-like material that responds to tangible user input, establishing the vision of an intuitive, fluid form of interaction, though subsequent research focused more on reconfigurable hardware. Shape n’ Swarm is one of the first steps toward making such hardware aware of a user’s manipulation so it can flexibly reconfigure itself in an interactive manner.

Shape n’ Swarm introduces a novel way to tangibly program a swarm of tabletop robots. It is the first tool to combine shape manipulation and speech as inputs to a large language model (LLM). The system allows users to hand-arrange a group of tabletop robots, called a swarm, into specific configurations and then use natural spoken commands, such as “make it walk forward” or “when I press here, wag the tail”, to generate complex animations and interactions. These dual inputs are processed by a multi-agent LLM architecture that interprets the shape, generates the corresponding animations or interactions as Python scripts, and orchestrates the robots’ actions through a set of custom APIs.
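To make the shape-plus-speech pipeline concrete, here is a minimal Python sketch of that flow: a shape-interpreter agent summarizes the arrangement, a code-generation agent produces a script, and an orchestrator runs it against a motion API. All names and behaviors here are illustrative assumptions, not the paper’s actual architecture, and the LLM step is stubbed with a canned response.

```python
# Hypothetical sketch of a dual-input (shape + speech) authoring pipeline.
# Class and function names are assumptions for illustration only.
from dataclasses import dataclass


@dataclass
class Robot:
    rid: int
    x: float
    y: float


def interpret_shape(robots):
    """Shape-interpreter agent: describe the hand-arranged layout as text."""
    coords = ", ".join(f"({r.x:.0f}, {r.y:.0f})" for r in robots)
    return f"{len(robots)} robots at {coords}"


def generate_script(shape_desc, spoken_command):
    """Code-generation agent: in the real system an LLM would emit a Python
    script from the shape description and command; stubbed here."""
    if "circle" in spoken_command:
        return "for r in robots: move(r, dy=1)"  # placeholder motion
    return "pass"


def run_script(script, robots):
    """Orchestrator: execute the generated script against a tiny motion API."""
    def move(r, dx=0.0, dy=0.0):
        r.x += dx
        r.y += dy
    exec(script, {"robots": robots, "move": move})


swarm = [Robot(0, 0.0, 0.0), Robot(1, 5.0, 0.0)]
desc = interpret_shape(swarm)
run_script(generate_script(desc, "move the shape in a circle"), swarm)
print([(r.x, r.y) for r in swarm])  # each robot nudged one step in y
```

The key design idea this sketch tries to capture is the separation of concerns the paper describes: shape interpretation, script generation, and execution are distinct stages, so the generated code only ever touches a small, safe motion API.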

Jeung and the team showed that even those with little or no programming background could quickly prototype robots to perform tasks ranging from acting out mathematical functions and geometric shapes to controlling remote objects, all through embodied design and natural dialogue. Participants described the process as “very easy to learn”, saying it helped them think through tasks and refine their thoughts into specific ideas.

“Matthew was the core driving force behind the project. The original idea was developed by him, while I assisted in shaping it into a research paper. With support from other CS undergraduate students, Steven and Luke, as well as strong academic and technical guidance from graduate students Anup and Michael (all listed as co-authors), he actively built the interactive system, adapted it through multiple LLM model updates, conducted the entire user study, and led the paper writing. It has been a pleasure to see Matthew grow through this project over the past two years,” said Nakagaki.

Looking ahead, the project revealed some key limitations in LLM transparency and the scalability of large robot swarms, prompting future research into manual user controls and alternative feedback mechanisms to further refine the Shape n’ Swarm tool. Despite these challenges, the “shape-aware generative authoring” method introduces a more intuitive paradigm for designing, learning, and creating with actuated interfaces. This work helps reimagine a future where complex robotic systems can be programmed by anyone, even those without any coding background.

“Shape n’ Swarm demonstrates the potential of combining hand-shaping and speech as a means of control; we hope future research will investigate this more broadly,” Jeung said. “This is just the first step towards an exciting future of ‘intent-aware’ tangible interaction. We believe our approach of shaping and speech is highly generalizable to other forms, from other types of robots and drones to shape-shifting devices and more.”

To learn more about the Shape n’ Swarm project, visit the project page here.
