By Whitney Heins
Lynne Parker sits in her office frantically working to complete a multimillion-dollar research grant proposal as the deadline looms. Back at home, her dinner is being prepared, furniture is being dusted, and laundry is being washed.
But Parker doesn’t have a maid. She has a robot, similar to the aproned, duster-donning Rosie from the popular cartoon The Jetsons. Is she worried that a red shirt might be mixed in with a load of white socks? Not really, because like Rosie, Parker’s robot is capable of making decisions as it performs various household chores.
Unfortunately, this is only a daydream. But Parker believes it’s only a matter of time until it becomes a reality. “Rosie is the direction that we are going. The world is not there yet, but we are making progress.”
Parker is an expert in distributed autonomous robotic systems, where robots can make their own choices and work with other robots, sensors, software, or people. Through her eyes, the world is filled with tasks these sophisticated robots could do more effectively than people. For example, they could collaborate to clean up hazardous materials or stock a warehouse—missions much more complicated than just performing a single function on an assembly line.
Building a Better Brain

As a professor of computer science at UT, Parker’s research is focused on a new generation of machines that can move around, work together, and think for themselves. She is tapping into the field of artificial intelligence (AI) to design an advanced robot “brain.”
“I was drawn to AI because I wondered how we are so smart and figure things out quickly, yet our brains are slow compared to typical computer speeds,” says Parker, who studied brain and cognitive science at MIT. “How is it that our brain is organized so that we can make sense of things, and how do you recreate that so robots can do helpful and intelligent things?”
Parker is using an AI technique called “machine learning” that enables devices to make choices without being explicitly told what to do. Instead, the machine relies on sets of step-by-step problem-solving procedures, or algorithms, to recognize and learn patterns. These algorithms act as a toolkit for Parker, who can add, subtract, and adjust them to meet her research needs.
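To make the idea concrete, here is a toy sketch of what pattern learning can look like in code, using the article's own laundry example. The garment colors, labels, and nearest-neighbor rule are illustrative inventions for this sketch, not anything from Parker's lab:

```python
# A minimal sketch of the machine-learning idea described above: instead of
# hard-coding a rule ("red items never go with whites"), the algorithm is
# given labeled examples and infers the pattern itself. All data here is
# invented for illustration.

# Each training example: the (red, green, blue) color of a garment, and the
# load a person sorted it into.
training_data = [
    ((250, 250, 245), "whites"),
    ((240, 235, 230), "whites"),
    ((200, 30, 40), "colors"),
    ((30, 60, 180), "colors"),
    ((220, 40, 60), "colors"),
]

def distance(a, b):
    """Squared Euclidean distance between two RGB colors."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def classify(color):
    """Nearest-neighbor rule: copy the label of the most similar example."""
    _, label = min(training_data, key=lambda example: distance(example[0], color))
    return label

# A red shirt the program has never seen is still sorted correctly,
# because it resembles the red examples it learned from.
print(classify((210, 35, 50)))    # -> "colors"
print(classify((245, 240, 235)))  # -> "whites"
```

No one told the program what "red" means; it generalizes from examples, which is the essence of learning a pattern rather than following an explicit instruction.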
“Robots often don’t have all the information they need due to limited bandwidth,” Parker says. “Therefore, they cannot know all possible situations that will be encountered and how to respond. We have to work to develop their common sense.”
Practice Makes Perfect
The inside of Parker’s Distributed Intelligence Laboratory feels much like a sci-fi movie set. Stony, vacant, unblinking eyes line the walls—each pair belonging to an impressionable robot body.
It is here, with the assistance of doctoral students Hao Zhang and Chris Reardon, that Parker is preparing a robot capable of teaching adults with learning disabilities life skills, such as setting the table or doing laundry.
“This robot has the ability to make people more self-sufficient by teaching them, rather than making them dependent on a robot for life,” Parker says. It could also be a lifelong helper with tasks that disabled adults, the elderly, and others cannot learn to do for themselves. The technology is on track to become more affordable than human caregivers.

The key to teaching a robot to respond correctly is exposing it to every situation imaginable. Like a parent teaching a child, Parker instructs the robot to do the right thing by repeating the action over and over again.
This lengthy process involves running multiple variations of the experiment, collecting data, performing statistical analyses, and then assigning probabilities to different responses in different circumstances. These probability estimates guide the robot to make the best decisions based on incomplete information.
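A simplified sketch of that process might look like the following: tally the outcome of each experimental run, estimate a success probability for each response in each situation, and have the robot pick the most promising response. The situations, actions, and trial counts below are invented for illustration:

```python
# A toy version of the trial-and-probability loop described above.
# Repeated experiments are tallied, success probabilities are estimated
# from the counts, and the robot chooses the response most likely to work.
from collections import defaultdict

# (situation, action) -> [successes, trials], filled in over many runs.
trials = defaultdict(lambda: [0, 0])

def record(situation, action, succeeded):
    """Log the outcome of one experimental run."""
    tally = trials[(situation, action)]
    tally[0] += int(succeeded)
    tally[1] += 1

def best_action(situation, actions):
    """Pick the action with the highest estimated success probability."""
    def p_success(action):
        successes, total = trials[(situation, action)]
        return successes / total if total else 0.0
    return max(actions, key=p_success)

# Simulated data from repeated table-setting trials:
for outcome in [True, True, False, True]:
    record("plate_visible", "grasp_plate", outcome)
for outcome in [False, True, False, False]:
    record("plate_visible", "ask_for_help", outcome)

# Even with incomplete information, the robot chooses the response that
# has worked most often: grasping the plate (estimated p = 0.75).
print(best_action("plate_visible", ["grasp_plate", "ask_for_help"]))
```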
“It is really exciting when you run a lot of tests and you see a robot make the right choice,” she says.
Situational Awareness
Another element critical to effective machine learning is proper representation of the problem, including the robot’s actual position and environment. For example, if a robot is teaching a person how to set the table, it must be at the correct height and distance from both the table and the person, and in a realistic home environment.
“For robotics, three-dimensional supercomputer models do not suffice,” Parker says. “Experiments need to be done in real time with real conditions.”
And they are. Repeatedly.
Parker’s ultimate goal is for robots to have a semblance of common sense. “Being able to deal with an unexpected event is really hard, even for people, but we manage. And so do animals,” she says.
This leads Parker to believe robots can, too. And they will learn soon enough.
In the not-too-distant future, Parker will take her trained robots out of the laboratory and into practical settings where people can directly benefit from their capabilities.
“Computers changed the way our society works,” she says. “They are everywhere, and I think it will get to a point when robots are ubiquitous, too. I think our research has us on a definite path to improve quality of life.”
Looking twenty years ahead, Parker envisions owning a housekeeping robot that will cook, clean, and do laundry while she races against the clock to complete another proposal. It might even be named Rosie.