Folding a towel or a T-shirt is kind of a mindless, simple chore, unless you’re a robot. Then it’s still mindless, but not so simple.

Commercial robotic devices can manipulate identically shaped objects — flawlessly fitting together parts in a car assembly line, for example. But they can’t deal with novelty. 

A more useful — and ambitious — robot could encounter objects with flexible shapes, yet still determine what it’s dealing with. Such a robot could take on an array of disarray: It could pick up each article from a pile of towels and clothes, figure out its shape and fold it.

Pieter Abbeel, an assistant professor of electrical engineering and computer sciences at UC Berkeley, and his students have now provided a human-sized robot with these skills — part of Abbeel’s long-term effort to greatly expand the robotic repertoire.

Robot programmers create thousands of computer instructions, called lines of code, to get their metal servants to perform correctly. Abbeel and his students developed programs that enable their robot to eliminate one possibility after another until it reaches a single inescapable conclusion: the exact shape of the cloth object it’s holding. Then it can finally get down to the business of folding.  

The lab first tried programming the robot to recognize the geometry of a piece of clothing while holding it up. They mounted two high-resolution cameras on the robot (its “eyes”) to produce images in which the micro-texture of the towel could be observed.

For each pixel the robot imaged, the program directed it to find the corresponding spot in a second image taken from a different viewpoint. This allowed the robot to map out the towel’s 3-D configuration. With that data, it could figure out where the mystery object’s corners were — the first step in manipulating it.
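For readers curious what that kind of pixel matching looks like in practice, here is a minimal, hypothetical sketch using the open-source OpenCV library. It assumes calibrated, rectified camera images and a known reprojection matrix from calibration; the function names, parameters and corner heuristic are illustrative assumptions, not the Berkeley team’s code.

```python
import cv2
import numpy as np

# Illustrative sketch: dense stereo matching to recover a hanging cloth's
# rough 3-D shape. Assumes rectified left/right images and a reprojection
# matrix Q from prior camera calibration.

def cloth_point_cloud(left_img, right_img, Q):
    """Return per-pixel 3-D points and the disparity map."""
    gray_l = cv2.cvtColor(left_img, cv2.COLOR_BGR2GRAY)
    gray_r = cv2.cvtColor(right_img, cv2.COLOR_BGR2GRAY)

    # Semi-global matching: for each pixel in the left image, search along
    # the corresponding row of the right image for the best match.
    matcher = cv2.StereoSGBM_create(minDisparity=0,
                                    numDisparities=128,  # must be a multiple of 16
                                    blockSize=5)
    disparity = matcher.compute(gray_l, gray_r).astype(np.float32) / 16.0

    # Convert disparities into metric (x, y, z) coordinates.
    points = cv2.reprojectImageTo3D(disparity, Q)
    return points, disparity

def lowest_hanging_points(points, disparity, k=4):
    """Crude corner candidates: the k lowest valid points in the cloud."""
    valid = disparity > 0
    pts = points[valid]
    # Camera y grows downward, so the largest y values hang lowest.
    return pts[np.argsort(pts[:, 1])[-k:]]
```

The expensive part is exactly what Abbeel describes below: the matcher compares every pixel in one image against candidates in the other, and the whole procedure has to be repeated from many viewpoints.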

They succeeded, but both the programmers and the robot had to work too hard.

“It was very hard computationally,” Abbeel says. “Matching all pixels across two images would take maybe two to three seconds, but you need to look at many different viewpoints, so it would take maybe five minutes before it could identify a corner, and then it would run through the whole process all over again to find a second corner.”

Abbeel figured there must be a better way. His team developed an approach that lets the robot figure out what article it is holding, and where it is holding it, using much simpler and faster visual processing.

Rather than mapping out the article’s entire 3-D configuration, the new strategy requires the robot to extract only two pieces of information from the images: the lowest point on the article when it’s being held up by one gripper, and the outline of the article in the image when it’s being held up by two grippers.

“Since we also provide the robot with an internal model of how cloth will move or hang when being held up, it can figure out what it’s holding with just these two pieces of information,” Abbeel says.
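To picture what those two measurements are, here is a hypothetical sketch of how they might be pulled out of a single camera image, assuming the cloth has already been separated from the background as a binary mask (for instance by simple color thresholding); the names and details are illustrative, not the lab’s software.

```python
import cv2
import numpy as np

# Illustrative only: extracting the two observations the simpler strategy
# relies on. Assumes `mask` is a binary image (cloth = 1, background = 0).

def lowest_point(mask):
    """Pixel coordinates (x, y) of the lowest cloth pixel."""
    rows, cols = np.nonzero(mask)
    i = np.argmax(rows)          # image y grows downward, so max row = lowest
    return int(cols[i]), int(rows[i])

def silhouette(mask):
    """Outline of the hanging cloth as an (N, 2) array of pixel coordinates."""
    contours, _ = cv2.findContours(mask.astype(np.uint8),
                                   cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    outline = max(contours, key=cv2.contourArea)   # keep the largest blob
    return outline.reshape(-1, 2)
```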

The robot starts out with a very large number of hypotheses — one for each possible clothing article and each possible grasp point on that article. Then it grasps and re-grasps the article hundreds of times, holding it up and imaging it each time. As it repeats this process, the number of hypotheses consistent with the observed heights and contours quickly shrinks, until it reaches a conclusion like “Now I know I’m holding article type C and grasping it at points 36 and 75.” A witness won’t see a “Eureka!” moment, but eventually the metal homemaker switches to folding mode.
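The elimination itself can be thought of as a simple filtering loop. The sketch below is only a schematic stand-in: the cloth-model function, the tolerance and the bookkeeping are invented placeholders, and a real system would also have to track how each re-grasp changes which point is being held and compare silhouettes as well as heights.

```python
# Schematic sketch of hypothesis elimination. `predict_height(article, point)`
# stands in for the internal cloth model, and `observe()` for a camera
# measurement of the lowest point; both are hypothetical placeholders.

def surviving_hypotheses(hypotheses, predict_height, observed_height, tol=0.02):
    """Keep only (article, grasp point) pairs whose predicted lowest-point
    height agrees with what the cameras actually measured."""
    return [(article, point) for (article, point) in hypotheses
            if abs(predict_height(article, point) - observed_height) < tol]

def identify(articles, grasp_points, grasp, observe, predict_height,
             max_regrasps=100):
    # One hypothesis for every article and every candidate grasp point on it.
    hypotheses = [(a, p) for a in articles for p in grasp_points[a]]
    for _ in range(max_regrasps):
        if len(hypotheses) <= 1:
            break                          # a single survivor: the "conclusion"
        grasp()                            # pick the cloth up at a new point
        hypotheses = surviving_hypotheses(hypotheses, predict_height, observe())
    return hypotheses
```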

Trying to get a robot to take on kitchen chores is fascinating in and of itself, Abbeel says, but he’s also carrying out the research to learn how to build intelligent systems that can perform far more complex jobs. He is in the very early stages of conceiving a surgical robot that could take on routine tasks for a surgeon, such as tying a knot, freeing the expert to focus on more critical aspects of the surgery.

He is collaborating with heart surgeon Douglas Boyd at UC Davis to identify the most useful contributions a robotic device could make in the surgical setting. Abbeel, Boyd and two other UC faculty scientists have presented a proposal to UC’s Center for Information Technology Research in the Interest of Society (CITRIS) for a proof-of-concept project to develop robot-assisted telesurgery, enabling a surgeon to direct a robotic surgical device remotely. Telesurgery might be used to perform fairly routine but urgent procedures when a surgeon can't get to the hospital in time.

CITRIS is one of UC’s four California Institutes for Science and Innovation, conceived to encourage collaborations among UC researchers across different disciplines and campuses, and between UC scientists and industry.

Abbeel credits CITRIS with launching his early-stage collaboration with cardiosurgeon Boyd. “We met at a CITRIS health care workshop that brought together scientists with different interests and skills. We decided to work together so we could develop applications that are useful in the most critical surgical areas.”

So, will robots eventually take away our jobs and leave us all listless?

“Well, of course, they’ve already replaced some assembly-line jobs, but I think people will still be doing 90 percent of what they are already doing — for work and after work,” Abbeel says. “Will people lie on the beach all day if a robot is doing their house chores? Who knows? Maybe they’ll have time to do more of the things they want, like gardening or cooking, or cycling, or maybe developing new kinds of robots.”