UChicago CS Collaboration Hopes to Improve Communication Between Humans and Smart Devices

February 04, 2019

Smart home devices such as thermostats, lights, window shades, and virtual assistants promise a Jetsons-like future where automation and machine intelligence perform routine tasks with little or no human input. But for now, these technologies can also be frustratingly obtuse, performing unexpectedly, repetitively, or not at all. When faced with lights coming on in the middle of the night, blinds that go up and down every 30 seconds, or windows that remain open despite rain, many users might question just how smart these devices really are.

A new project from three UChicago CS researchers seeks to address these headaches by zeroing in on today’s most common route of communication between humans and their smart devices: trigger-action programming. Many of these devices are controlled through an interface where users specify an if-then relationship, such as “IF it’s sunny outside, THEN close the blinds.” But this seemingly simple process can quickly grow complex and bug-ridden when if-then rules pile up, conflict, or mislead, creating a disconnect between user and device.
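To make the idea concrete, here is a minimal sketch of what a trigger-action rule might look like in code. The `Rule` structure, field names, and example state are invented for illustration; they do not correspond to any specific smart-home platform.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Rule:
    trigger: Callable[[dict], bool]  # predicate over the current sensor state
    action: str                      # device command to issue when triggered

def evaluate(rules: list, state: dict) -> list:
    """Return the action of every rule whose trigger fires for this state."""
    return [r.action for r in rules if r.trigger(state)]

# "IF it's sunny outside, THEN close the blinds."
rules = [Rule(trigger=lambda s: s["weather"] == "sunny", action="close_blinds")]
print(evaluate(rules, {"weather": "sunny"}))  # ['close_blinds']
```

Each rule is independent, which is exactly what makes the model easy to learn and, as rules accumulate, easy to break.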

With a new National Science Foundation grant, Blase Ur, Shan Lu, and Ravi Chugh of UChicago CS, with Michael Littman of Brown University, aim to study and improve upon current models of trigger-action programming. Through a unique combination of expertise in human-computer interaction, programming languages, systems, and machine learning, the team will build an interactive interface that helps users communicate with their devices and vice versa, hopefully creating a new generation of truly smart home technology.


“Right now, smart devices are actually internet-connected devices more so than things that have any type of useful intelligence,” said Ur, Neubauer Family Assistant Professor at UChicago CS. “We want to be able to provide you the support to build a mutual understanding between a user’s intent and what the machine believes it should be doing.”

The collaboration grew from a 2014 study by Ur, Littman, and two co-authors examining how quickly and effectively people learn to use trigger-action commands. The results were encouraging; even users with no prior programming experience were able to create if-then rules to control smart home devices. However, subsequent user studies found an assortment of common trigger-action programming bugs that can appear when rules interact or behave in unexpected ways.

For example, a user could create a rule that says “IF I come within 1 mile of a pizza shop THEN order a pizza,” but then mistakenly order several pizzas if they cross over the one-mile boundary multiple times — a kind of “repeated triggering” bug. Subtle wording differences, such as “IF the garage door opens WHILE it is raining, THEN close the garage door” versus “IF it starts raining WHILE the garage door is open, THEN close the garage door,” can likewise cause a mismatch between what the user wants and what the machine hears.
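The repeated-triggering bug can be illustrated with a small sketch: a naive rule fires on the state itself (being within a mile), while a fixed version fires only on the transition into that state. The function names and the distance readings below are hypothetical.

```python
def naive_rule(distances):
    """Order a pizza on every reading within 1 mile -- the buggy version."""
    return sum(1 for d in distances if d <= 1.0)

def edge_triggered_rule(distances):
    """Order only when crossing *into* the 1-mile boundary from outside."""
    orders, inside = 0, False
    for d in distances:
        now_inside = d <= 1.0
        if now_inside and not inside:
            orders += 1
        inside = now_inside
    return orders

# Distance readings: enter the radius, linger, leave, then re-enter.
path = [2.0, 0.8, 0.9, 0.7, 1.5, 0.8]
print(naive_rule(path))           # 4 -- one pizza per in-range reading
print(edge_triggered_rule(path))  # 2 -- one pizza per boundary crossing
```

The difference between firing on a condition and firing on a change in that condition is precisely the kind of subtlety the "opens WHILE raining" versus "starts raining WHILE open" example turns on.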

A discussion between the eventual members of the team at one of the biannual meetings of UChicago’s CERES Center “got us thinking more broadly, how do people communicate their intent to systems?” Ur said. “The very broad goal is that people should be able to communicate to their collection of devices and services what they want, and that this whole communication from person to machine will be pretty frictionless.”

The new project will combine further user studies with formal methods that map potential trigger-action bugs and potential solutions, which can then be suggested to the user. If a user mistakenly sets up conflicting or inexact rules that produce a likely undesirable outcome, such as leaving windows open during a thunderstorm, the interface could warn the user and help them create a better rule. Alternatively, the proposed interface could move away from if-then formulations, instead asking the user what they want and then automatically back-filling the rules that produce that result.
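One way such conflict-checking could work, sketched very roughly below, is to enumerate possible device states and flag any state in which two rules demand contradictory actions at once. The rules, state variables, and the brute-force check are all illustrative assumptions, not the project's actual formal methods.

```python
from itertools import product

# Tiny enumerated state space over two boolean sensors (hypothetical).
STATES = [{"raining": r, "window_open": w}
          for r, w in product([True, False], repeat=2)]

rules = [
    ("open_window",  lambda s: not s["raining"]),  # fresh-air rule
    ("close_window", lambda s: s["window_open"]),  # (overzealous) draft rule
]

CONTRADICTORY = {("open_window", "close_window")}

def find_conflicts(rules, states):
    """Return every state in which contradictory actions fire together."""
    conflicts = []
    for s in states:
        fired = [action for action, trigger in rules if trigger(s)]
        for a, b in CONTRADICTORY:
            if a in fired and b in fired:
                conflicts.append(s)
    return conflicts

print(find_conflicts(rules, STATES))
# [{'raining': False, 'window_open': True}]
```

A real checker would reason symbolically rather than enumerate states, but the output is the same in spirit: a concrete scenario the interface can show the user when warning about a bad rule.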

“Perhaps you don't have to say when to close the window, you can just describe some state that you desire, and let the system figure out what to do and when to do it to satisfy your intent,” said Lu, an Associate Professor at UChicago CS.

The researchers will also examine whether machine learning can help users get the most out of their smart devices by analyzing behavior and suggesting new rules. If a homeowner always manually turns on their heat when the room temperature drops to 67 degrees, or repeatedly overrides their smart window blinds when they close in the morning, the interface can learn from those actions and propose new rules that automate them in the future. Here, communication between machine and human is also important — imagine a phone notification that says “you turned on your office lights every day at sunset, would you like me to do that for you?”
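The simplest version of such rule suggestion is frequency counting over a log of manual actions: if the same action keeps happening under the same condition, propose a rule. The log format, threshold, and rule syntax below are assumptions for illustration, not the project's actual learning approach.

```python
from collections import Counter

def suggest_rules(log, min_count=3):
    """log: list of (condition, action) pairs observed from manual actions.
    Propose an if-then rule for any pair seen at least min_count times."""
    counts = Counter(log)
    return [f"IF {condition} THEN {action}"
            for (condition, action), n in counts.items() if n >= min_count]

log = ([("temperature <= 67", "turn_on_heat")] * 4 +
       [("sunset", "office_lights_on")] * 2)
print(suggest_rules(log))
# ['IF temperature <= 67 THEN turn_on_heat']
```

The sunset pattern falls below the threshold here, mirroring the design question in the article: the system should propose automation only once it is reasonably confident, and always ask before acting.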

The researchers hope that, once complete, their interface will further lower the barriers to effective communication with connected devices and services, expanding the potential of automation beyond highly technical areas of computer and data science.

“At a high level, the general goal is to program without programming, what's called program synthesis,” said Chugh, an Assistant Professor at UChicago CS. “This is a cool domain where it seems like not only can program synthesis be effective, but also where it can be built into a system that can actually be used by people. It's a really interesting blend of formal methods and human-facing challenges.”