QUANTA

Tuesday, July 12, 2011


In Search of a Robot More Like Us

By JOHN MARKOFF

MENLO PARK, Calif. — The robotics pioneer Rodney Brooks often begins speeches by reaching into his pocket, fiddling with some loose change, finding a quarter, pulling it out and twirling it in his fingers.

The task requires hardly any thought. But as Dr. Brooks points out, training a robot to do it is a vastly harder problem for artificial intelligence researchers than I.B.M.’s celebrated victory on “Jeopardy!” this year with a computer system named Watson.

Although robots have made great strides in manufacturing, where tasks are repetitive, they are still no match for humans, who can grasp things and move about effortlessly in the physical world.

Designing a robot to mimic the basic capabilities of motion and perception would be revolutionary, researchers say, with applications stretching from care for the elderly to returning overseas manufacturing operations to the United States (albeit with fewer workers).

Yet the challenges remain immense, far greater than artificial intelligence hurdles like recognizing and producing speech.

“All these problems where you want to duplicate something biology does, such as perception, touch, planning or grasping, turn out to be hard in fundamental ways,” said Gary Bradski, a vision specialist at Willow Garage, a robot development company based here in Silicon Valley.

“It’s always surprising, because humans can do so much effortlessly.”

Now the Defense Advanced Research Projects Agency, or Darpa, the Pentagon office that helped jump-start the first generation of artificial intelligence research in the 1960s, is underwriting three competing efforts to develop robotic arms and hands one-tenth as expensive as today’s systems, which often cost $100,000 or more.

Last month President Obama traveled to Carnegie Mellon University in Pittsburgh to unveil a $500 million effort to create advanced robotic technologies needed to help bring manufacturing back to the United States. But lower-cost computer-controlled mechanical arms and hands are only the first step.

There is still significant debate about how even to begin to design a machine that might be flexible enough to do many of the things humans do: fold laundry, cook or wash dishes. That will require a breakthrough in software that mimics perception.

Today’s robots can often do one such task in limited circumstances, but researchers describe their skills as “brittle.” They fail if the tiniest change is introduced. Moreover, they must be reprogrammed in a cumbersome fashion to do something else.

Many robotics researchers are pursuing a bottom-up approach, hoping that by training robots on one task at a time, they can build a library of tasks that will ultimately make it possible for robots to begin to mimic humans.
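A rough sketch in Python of what such a library of tasks might look like; the skill names and the registry interface here are illustrative assumptions, not any lab’s actual software:

```python
# Hypothetical sketch of a robot "library of tasks." The skill names and
# interface are illustrative assumptions, not any real system's software.
from typing import Callable, Dict


class SkillLibrary:
    """Registry mapping task names to individually trained skill routines."""

    def __init__(self) -> None:
        self._skills: Dict[str, Callable[[], None]] = {}

    def register(self, name: str, skill: Callable[[], None]) -> None:
        self._skills[name] = skill

    def perform(self, name: str) -> None:
        if name not in self._skills:
            raise KeyError(f"robot has no trained skill for: {name}")
        self._skills[name]()


library = SkillLibrary()
library.register("fold_towel", lambda: print("folding towel..."))
library.register("fetch_beer", lambda: print("fetching beer..."))
library.perform("fold_towel")  # works: this skill was trained

try:
    library.perform("wash_dishes")  # untrained task
except KeyError as err:
    print(err)  # the brittleness researchers describe: no skill, no behavior
```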

Others are skeptical, saying that truly useful machines await an artificial intelligence breakthrough that yields vastly more flexible perception.

The limits of today’s most sophisticated robots can be seen in a towel-folding demonstration that a group of students at the University of California, Berkeley, posted on the Internet last year: In spooky, anthropomorphic fashion, a robot deftly folds a series of towels, eyeing the corners, smoothing out wrinkles and neatly stacking them in a pile.

It is only when the viewer learns that the video is shown at 50 times normal speed that the meager extent of the robot’s capabilities becomes apparent. (The students acknowledged this spring that they were only now beginning to tackle the further challenges of folding shirts and socks.)

Even the most ambitious and expensive robot arm research has not yet yielded impressive results.

In February, for example, Robonaut 2, a dexterous robot developed in a partnership between NASA and General Motors, was carried aboard a space shuttle mission to be installed on the International Space Station. The developers acknowledged that the software required by the system, which is humanoid in form from the torso up, was unfinished, and that the robot was sent up then only because a rare launch window was available.

“We’re in a funny chicken-and-egg situation,” Dr. Brooks said. “No one really knows what sensors or perceptual algorithms to use because we don’t have a working hand, and because we don’t have a grasping strategy nobody can figure out what kind of hand to design.”

Dr. Brooks is also tackling the problem: In 2008 he founded Heartland Robotics, a Boston-based company that is intent on building a generation of low-cost robots.

And the three competing efforts to develop robotic arms and hands with Darpa financing — at SRI International, Sandia National Laboratories and iRobot — offer some reasons for optimism.

Recently at an SRI laboratory here, two Stanford University graduate students, John Ulmen and Dan Aukes, put the finishing touches on a significant step toward human capabilities: a four-fingered hand designed to grasp with something approaching a human’s precise sense of touch.

Each three-jointed finger is made in a single manufacturing step by a three-dimensional printer and is then covered with “skin” derived from the same material used to make the touch-sensitive displays on smartphones.
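As a rough illustration of how readings from such capacitive “skin” might drive a grasp, a hedged Python sketch; the taxel grid, threshold and read function are invented for illustration, not SRI’s design:

```python
# Hypothetical tactile-feedback loop for a finger covered in capacitive
# "skin." The 4x4 sensor grid, signal range and contact threshold are
# assumptions for illustration, not SRI's actual design.
import random

GRID_ROWS, GRID_COLS = 4, 4   # assumed taxels per finger pad
CONTACT_THRESHOLD = 0.3       # assumed normalized capacitance change


def read_taxels():
    """Stand-in for a hardware read: one normalized signal per taxel."""
    return [[random.random() for _ in range(GRID_COLS)]
            for _ in range(GRID_ROWS)]


def in_contact(frame) -> bool:
    """Report contact if any taxel exceeds the threshold."""
    return any(v > CONTACT_THRESHOLD for row in frame for v in row)


# Close the finger step by step until the skin reports contact.
for step in range(20):
    if in_contact(read_taxels()):
        print(f"contact at step {step}; holding grip")
        break
else:
    print("no contact detected; the grasp likely missed")
```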

“Part of what we’re riding on is there has been a very strong push for tactile displays because of smartphones,” said Pablo Garcia, an SRI robot designer who is leading the project along with Robert Bolles, an artificial intelligence researcher.

“We’ve taken advantage of these technologies,” Mr. Garcia went on, “and we’re banking on the fact they will continue to evolve and be made even cheaper.”

Still lacking is a generation of software powerful and flexible enough to do the tasks that humans do effortlessly. That, too, will require a breakthrough in machine perception.

“I would say this is more difficult than what the Watson machine had to do,” said Gill Pratt, the computer scientist who is the program manager in charge of Darpa’s Autonomous Robotic Manipulation program, known as ARM.

“The world is composed of continuous objects that have various shapes” that can obscure one another, he said. “A perception system needs to figure this out, and it needs the common sense of a child to do that.”
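A minimal sketch of why this is hard, using OpenCV, the open-source vision library Dr. Bradski originated (the example assumes OpenCV 4 and builds a synthetic scene rather than using a real camera): two overlapping shapes merge into a single outline, exactly the confusion Dr. Pratt describes.

```python
# Synthetic illustration of the occlusion problem, using OpenCV 4.
# Two overlapping objects merge into one connected outline -- separating
# them again is where the "common sense of a child" comes in.
import cv2
import numpy as np

# A synthetic tabletop scene: a disk partly covered by a box.
scene = np.zeros((200, 300), dtype=np.uint8)
cv2.circle(scene, (110, 100), 50, 255, -1)            # round object
cv2.rectangle(scene, (140, 60), (260, 140), 255, -1)  # box overlapping it

# Threshold and extract connected outlines.
_, binary = cv2.threshold(scene, 127, 255, cv2.THRESH_BINARY)
contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                               cv2.CHAIN_APPROX_SIMPLE)

# Two objects, but only one contour: the shapes obscure each other.
print(f"objects in scene: 2, contours found: {len(contours)}")
```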

At Willow Garage, Dr. Bradski and a group of artificial intelligence researchers and roboticists have focused on “hackathons,” in which the company’s PR2 robot has been programmed to do tasks like fetching beer from a refrigerator, playing pool and packing groceries.

In May, with support from the White House Office of Science and Technology Policy, Dr. Bradski helped organize the first Solutions in Perception Challenge. A prize of $10,000 is offered for the first team to design a robot that is able to recognize 100 items commonly found on the shelves of supermarkets and drugstores. Part of the prize will be given to the first team whose robot can recognize 80 percent of the items.
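The arithmetic of the contest is simple to state; a toy Python sketch of how such a benchmark might be scored (item names and predictions are invented; the pass rule is the 80 percent threshold described above):

```python
# Toy scoring for a recognition benchmark in the spirit of the Solutions
# in Perception Challenge. Item names and predictions are invented.

def score(predictions, ground_truth, threshold=0.80):
    """Return (accuracy, passed) over a set of labeled items."""
    correct = sum(1 for item, label in ground_truth.items()
                  if predictions.get(item) == label)
    accuracy = correct / len(ground_truth)
    return accuracy, accuracy >= threshold

# A 50-object set in which 34 items are recognized -- 68 percent,
# echoing Brett's result below, and short of the 80 percent goal.
truth = {f"item_{i}": f"label_{i}" for i in range(50)}
preds = {f"item_{i}": (f"label_{i}" if i < 34 else "unknown")
         for i in range(50)}
accuracy, passed = score(preds, truth)
print(f"recognized {accuracy:.0%}; reached goal: {passed}")
```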

At the contest, held during a robotics conference in Shanghai, none of the contestants reached the 80 percent goal. The team that did best was the laundry-folding team from Berkeley, which has named its robot Brett, for Berkeley Robot for the Elimination of Tedious Tasks.

Brett was able to recognize 68 percent of a smaller group of 50 objects. And the team has made progress in its quest to build a machine to do the laundry; it recently posted a new video showing how much it has sped up the robot.

“Our end goal right now is to do an entire laundry cycle,” said Pieter Abbeel, a Berkeley computer scientist who leads the group, “from dirty laundry in a basket to everything stacked away after it’s been washed and dried.”

Source: http://goo.gl/OqUGU

