More Control, Less Connection: How User Control Affects Robot Social Agency
The rapid advancement of AI has created more capable robots that are now appearing throughout our homes, workplaces, and everyday environments. These robot assistants typically come with manufacturer-defined default settings, but as with any consumer product, a one-size-fits-all approach rarely satisfies everyone. For robots to gain widespread acceptance in personal spaces like our homes, they need to be customizable to individual preferences. However, this customization presents an interesting dilemma.
Many cutting-edge AI-powered robots are marketed as autonomous, intelligent entities capable of natural language interaction. This led Ph.D. Candidate Alex Wuqi Zhang to wonder: might allowing users to customize or program a robot’s behavior inadvertently undermine the very perception of social agency and autonomy that makes these intelligent products appealing in the first place?
Zhang first got the idea for this research from a scene in the movie Interstellar. “One very specific moment that kindled this idea was when Cooper, played by Matthew McConaughey, was adjusting the humor and honesty settings of his robot TARS,” Zhang said. “I found it jarring to adjust the personality/behavior of a robot that had such a strong social presence.”
As a researcher passionate about human-centered technology, Zhang found the perfect environment to explore this question in Assistant Professor Sarah Sebo’s Human-Robot Interaction (HRI) lab at the University of Chicago’s Department of Computer Science. There, he designed a study to examine how different levels of customization affect our perception of robots as social beings. His paper, “Balancing User Control and Perceived Robot Social Agency through the Design of End-User Robot Programming Interfaces,” presented at the 2025 ACM/IEEE International Conference on Human-Robot Interaction, revealed a clear pattern: when users could program granular details of a robot’s behavior, they began to view it less as an autonomous social agent and more as a preprogrammed tool. This insight earned the paper a Best Paper nomination and highlighted a key tension in robot design.
For his experiment, Zhang developed a prototype using Hello Robot’s Stretch platform, transforming it into an at-home robot butler. He also engineered three different end-user programming interfaces that varied in the level of customization they allowed: a high-granularity interface allowing detailed customization of every behavior, a low-granularity option offering simple macro adjustments through sliders and buttons, and a control condition with no customization options.
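To make the contrast between the two customization levels concrete, here is a minimal, hypothetical sketch in Python. The parameter names, ranges, and phrasings below are illustrative assumptions for this article, not the actual interfaces or settings used in the study.

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical illustration of the two customization granularities described above.
# All names and values are assumptions, not taken from the paper.

@dataclass
class LowGranularitySettings:
    """Macro-level adjustments: a few coarse sliders over overall behavior."""
    talkativeness: float = 0.5   # 0.0 = quiet, 1.0 = chatty
    formality: float = 0.5       # 0.0 = casual, 1.0 = formal
    proactivity: float = 0.5     # 0.0 = waits for requests, 1.0 = offers help unprompted

@dataclass
class HighGranularitySettings:
    """Fine-grained control: the user scripts individual behaviors and phrasings."""
    greeting_phrase: str = "Welcome! May I take your bag?"
    snack_offer_phrase: str = "Would you like a snack from the table?"
    approach_distance_m: float = 0.8   # how close the robot drives to the user
    speech_rate_wpm: int = 150         # text-to-speech rate, words per minute
    small_talk_topics: List[str] = field(
        default_factory=lambda: ["your day", "the weather"]
    )

# The control condition exposes no interface at all: the robot runs on its defaults.
if __name__ == "__main__":
    print(LowGranularitySettings())
    print(HighGranularitySettings())
```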
“The engineering behind building this prototype of a robot butler that could handle all these different tasks at once was like conducting a small orchestra,” Zhang explained. “We had to coordinate everything from how it moves and talks to how it sees the world around it.”
The study took place in a conference room thoughtfully transformed into a living space, where participants engaged with the robot in everyday scenarios. They handed over their bags upon entering, engaged in casual conversation about their day, and requested snacks from a nearby selection. Each participant experienced one of the three interface conditions, allowing researchers to observe how different levels of customization influenced their perception of the robot as a social entity.
The significance of this research extends beyond academic interest. As Zhang points out, the customization of AI tools and robots has become increasingly central to product design, yet the research community has largely overlooked how different levels of customization might affect the fundamental way users perceive these technologies.
“We know customization can better align products with user preferences,” Zhang reflected, “but what might we inadvertently damage in the product experience ecosystem? Understanding these trade-offs is crucial for the overall user experience.” He believes that for many robot applications on the horizon, maintaining social capabilities could be crucial to long-term acceptance, making perceived social agency an aspect designers may want to consider.
The research raises important questions for companies developing the next generation of AI companions and assistants: How much control should users have? Where is the balance between customization and maintaining the perception of autonomous intelligence?
Building on these findings, Zhang has already begun exploring new territory with a follow-up study investigating multi-modal inputs for guiding robot behavior. This continued research aims to further refine our understanding of human-robot relationships in an increasingly automated world.
To learn more about this and other work from the HRI lab, visit the lab’s research page.