Enabling the Great Robot Migration from factories

Jakub Tomášek
10 min read · May 6, 2018

Hollywood did us roboticists a terrible disservice. The picture of a fierce Arnold Schwarzenegger with a steel jaw and a Colt in hand now pops into mind at the word “robot”. We somehow always manage to disappoint visitors to our labs with robots being cumbersome and slow, and not nearly as handsome as Arnold.

Actually, in my humble opinion, Star Wars did a waay better job painting the robots of the future: clumsy and slow to keep up with their human overlords. And not all of them humanoids. I definitely prefer the picture of R2-D2 or C-3PO to the Terminator. It is also a much less scary depiction of the future!

We have, however, not come even close to those far-fetched robot depictions. While robots have been working tirelessly in factories for decades, and are now being deployed in warehouses, their tasks are still very repetitive. They are directly programmed by humans, with very little autonomy. Those are environments where we can guarantee that certain rules are followed; that keeps the number of situations the robot must deal with manageable.

But dealing with the wild west of real life is a whole other level of challenge.

Or is it?

Let’s pause there. I should tread carefully. “Robot” is a pretty broadly defined term; basically even your washing machine qualifies as a robot. (Do you know the Robot or Not? podcast?)

Yet, have you ever thought of your washing machine as a robot? It is certainly intelligent: it washes the clothes autonomously, it makes decisions based on the weight of the clothes in its drum, and it interacts with people to tell them it has done its job. Maybe, if there had been such a thing as a robot in 1910 when it was invented, we would nowadays call it a “washing robot”; but Čapek coined the term robot in R.U.R. much later, in 1920.

So, many machines are robots; in fact, nowadays most of them are. We just don’t call all of them that. The term has been mostly reserved for machines performing tasks which are hard, and which resemble what humans or animals do. Over the past few decades, work on “autonomous robots” has been dominated mostly by academia. Academics defined several subtasks like localization/navigation, planning, manipulation, human-robot interaction, computer vision, etc. Those are tasks closely relatable to us because we perform them ourselves all the time (without much effort). Of course, robotics encompasses more, notably mechanical and electrical design.

Autonomous robots in the wild west

So, what autonomous robots are out there? There are surprisingly few truly autonomous robots deployed in the real world which are actually working.

Mars rovers

The first robots with real autonomy were NASA’s Mars rovers. Limited bandwidth and a signal delay of up to tens of minutes each way rendered teleoperation impossible.

Sojourner, a little rover launched in 1996, was a reactive system: it traveled to a commanded waypoint while avoiding rocks, using stereo vision limited to 20 points. It traveled about 100 meters in total. Spirit and Opportunity, two sibling rovers launched in 2003, had significantly superior autonomy, with pose estimation, hazard detection, and local motion planning. They can also pick scientifically interesting spots along the path. Opportunity is still healthily exploring Mars now in 2018 and has covered an almost incredible 45 km. The latest rover, Curiosity, a giant compared to its predecessors, is faster and uses a global planner which lets it go farther on its own while avoiding rocks. All the rovers are checked and commanded by a human “driver” on a daily basis.

Roomba

OK, the cost of Curiosity, at more than $2.5 billion, is certainly steep. Consumers are looking for something slightly more affordable. Currently, the only consumer robots which are actually useful are Roombas and variations on them, like robotic lawn mowers and window cleaners.

The original Roombas were quite dumb: they hoovered in straight lines, and when they bumped into an obstacle, they turned in a random direction. The concept, initially developed by iRobot, has been copied by many other companies; over time they increased the “intelligence” of these robotic hoovers, and the price dropped as Chinese companies jumped in. For example, the Dyson 360 Eye has a 360° camera that builds a detailed floor plan, so it does not revisit spots in the flat.

Yet this progress has been incremental, and it was the public release of the simple Roomba in 2002 that sparked the whole field of home robots. I think there is a lot to learn from this success.

iRobot was started by Rodney Brooks (among others), who is famously a proponent of the reactive architecture for robots. The reactive architecture resembles the behavior of animals. For example, a fly follows certain smells (yes, they do like the smell of poop), and if there is movement, it flies away. This simple strategy lets them thrive, particularly if there is lots of poop. In fact, what we call “animal instincts” is what we fall back on when fast reactions are needed. The Roomba was built with this in mind.
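The idea fits in a few lines of code. Here is a toy sketch of a bump-and-turn reactive controller in the spirit of the original Roomba (the function and its parameters are illustrative, not iRobot’s actual algorithm):

```python
import random

def reactive_step(bumper_hit: bool, heading_deg: float) -> float:
    """One control step of a Roomba-style reactive controller.

    No map, no planning, no memory: drive straight until the bumper
    fires, then rotate by a random angle and drive straight again.
    """
    if bumper_hit:
        # The "escape" behavior overrides "drive straight":
        # turn away somewhere between 90 and 270 degrees.
        heading_deg = (heading_deg + random.uniform(90.0, 270.0)) % 360.0
    return heading_deg
```

Despite its simplicity, a controller like this eventually covers the whole floor, which is exactly the trade-off Brooks advocated: robust behavior without a world model.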

Unmanned Aerial Vehicles (UAVs)

In today’s lingo: drones.

Most UAVs, whether military or consumer, are teleoperated. Some pack elementary autonomy like trajectory following and simple obstacle detection. There is, however, an exception: recently, the MIT startup Skydio launched a drone which exceeds anything seen so far in terms of UAV autonomy. It can follow a moving subject without crashing in nearly any environment, even a dense forest. You can actually spend hours on YouTube watching this four-prop drone follow people through the forest.

It achieves that using 13 cameras for 3D obstacle detection, combined with a motion-planning algorithm. You don’t control the drone directly; you just tell it what to track and what kind of shot you want.

Waymo’s self-driving cars

Self-driving cars are, in fact, robots, and they are the state of the art of robot autonomy. The current direction is very ambitious: deploying robots on existing roads, in real traffic, among humans. Robots have never been closer to humans in life-and-death situations. While self-driving cars have not yet been deployed on their own, many companies test them in the real world, usually supervised by a human safety driver. It appears that currently only one company, Waymo, is close to the goal. (That impression might be wrong, however; there is too much at stake, and other companies might be deliberately concealing how advanced they are.)

Don’t get me wrong, self-driving cars are nothing new. In fact, CMU completed a nearly 5,000 km cross-country journey in the Navlab project in 1995, with 98% of the journey driven autonomously. That remaining 2% has been bothering thousands of engineers and researchers for the past 23 years.

Yet driving on public roads is only a limited picture of the real world. There are rules which most drivers dutifully abide by. The 2% still being solved are the exceptions to these rules: missing lane markings, weather changes, traffic accidents, jaywalkers, etc. Meanwhile, when we fully control the environment, as in a factory, self-driving is easy; autonomous trains have been deployed since the 1980s.

So, current autonomous robots are far from the picture of the Terminator. Their autonomy effectively lies in avoiding rocks, finding the way back to the docking station, avoiding trees, and keeping to a lane while avoiding other humans. But they do this pretty well. And none of them walks.

There have been many advances in robot walking; notably, a series of videos of Atlas from Boston Dynamics walking in the snow and doing backflips circled the world and once again sparked discussions about the threat of Skynet. It is fascinating how people associate intelligence with such a “basic” human skill as walking. But Atlas does indeed look like a man.

The DARPA Robotics Challenge showed that wheeled and quadruped robots are still superior to bipedal ones, simply because they are more robust.

We make robots walk bipedally mainly to fit into our environment, and into our concept of a robot.

To be clear, I regard walking as a genuinely challenging and fun control problem. We can make great use of walking robots, particularly in challenging terrain, such as in rescue operations; quadrupeds, meanwhile, can go almost anywhere. Bipedal walking, however, is hardly an advantage for the majority of tasks: it adds unnecessary complexity in mechanical design and software, and it increases power consumption. It has hardly anything to do with actual intelligence.

Robotic world

George, a state-of-the-art autonomous delivery robot, was on an exciting mission: carrying a pack of condoms to a young couple, Martin and Amanda, who lived on the 15th floor of a beautiful apartment building.

The clock was ticking. George’s company, FastFast Delivery, promises half-hour delivery to its members, so he had to hurry. Approaching the building, George already knew of the two entrance ramps leading to the basement. When he reached the first ramp, it was blocked by a 20 by 20 cm object in the middle. His image-recognition algorithm told him it was poop with 40% probability, a snake with 20%, an ice cream with 10%. He could not afford to lose time on this and continued to the second ramp, which was clear. He started climbing.

But then George’s worst nightmare came true. All of a sudden, an obstacle 1 m wide and 1.3 m tall was blocking the ramp and quickly approaching. George stopped; he couldn’t quite identify it. It had wheels and legs, but no face, and there were even some boxes. George could not understand that it was a man in a wheelchair carrying a pile of boxes, and, crucially, that the man had not seen him.

There was little time to react. And George usually takes his time to react. The crash was inevitable.

The man left by ambulance. George’s parts were picked up an hour later by an exhausted delivery guy; FastFast had promised him he would be home every day before 5 pm, and this was his third crash that day.

And small Amanda was born 9 months later.

The real world is complicated. Moreover, the curse of dimensionality plays cruelly against us. If solving a relatively structured problem like autonomous driving took thousands of the best engineers 30 years, how long will it take to solve the next challenge?

The biggest issue is testing: robots are complex cyber-physical systems, and making sure they work requires extensive, monitored testing in real situations; simulation cannot replace real-world testing (at least not yet). And as long as robots keep working without understanding what they do, there will always be situations they cannot intelligently cope with.

There are only two solutions to the problem. We can either wait until artificial general intelligence (AGI) arrives, or accept that we do not have that capability now and instead make our world robot-friendly: a little more like a factory, with rules. A factory where robots can move with more confidence.

I believe that a robot-friendly environment will be the main enabler of the Great Robot Migration.

Why do I even have to highlight this? Because currently most of the effort goes into making robots smarter, but also more complicated. Meanwhile, very little effort goes into improving the infrastructure to make deployment more convenient and cheaper.

Let’s make objects easily graspable and mark them with machine-readable codes; let’s put machine-readable codes around for easy localization where the GPS signal is lacking; let’s use ramps or elevators instead of stairs. Let the robot call an elevator by sending a packet over the network instead of pressing a button… Then we can deploy robots tomorrow, not in 30 years!
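To make the elevator example concrete, here is a minimal sketch of what such a packet could look like. The message schema, the robot ID, and the `elevator.local` endpoint are all made up for illustration; a real building would expose its own API (a REST endpoint, a BACnet/KNX gateway, or similar):

```python
import json

def build_elevator_call(robot_id: str, pickup_floor: int,
                        destination_floor: int) -> bytes:
    """Encode an elevator call as a small JSON packet."""
    message = {
        "type": "elevator_call",
        "robot_id": robot_id,
        "pickup_floor": pickup_floor,
        "destination_floor": destination_floor,
    }
    return json.dumps(message).encode("utf-8")

packet = build_elevator_call("george-07", 0, 15)
# In a real deployment the robot would then send it over the network:
#   with socket.create_connection(("elevator.local", 9000)) as s:
#       s.sendall(packet)
```

The point is not the code but the contrast: a well-defined network interface is trivial for a robot, while pressing a physical button reliably requires an arm, a camera, and a whole manipulation stack.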

And it would actually cost little compared to decades of complex software development.

Salvation by teleoperation

You are in a boring two-hour meeting. Your phone buzzes with a message: “This is George. I ran into trouble, could you help me resolve it?” The interface to your home robot George pops up; you see he is in your bedroom holding a bra: “Sir, I found this unmarked piece of clothing. Is it garbage, and should I throw it away?” You shamefully glance around to check whether any of your colleagues noticed, then navigate George to leave it on the bed.

Teleoperation has been undervalued in robotics. In the end, an autonomous robot should be, well, autonomous. Self-driving companies only recently realized that at some point teleoperation will be necessary when operating their car fleets. When there is no one in the car, who will resolve the situation with a policeman, or handle a crash?

Teleoperation does not replace the human. But it makes them safer, more comfortable, and more efficient. US drone pilots now die of smoking, heart attacks, and falling out of bed, like normal folks do; the plane is on the other side of the planet.

If human attention is needed for only 2% of the time a robot operates, a single teleoperator could take care of 50 robots.
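That ratio is just the reciprocal of the attention fraction. A back-of-the-envelope staffing estimate (deliberately ignoring queueing effects: help requests cluster in practice, so a real fleet needs some slack):

```python
import math

def operators_needed(fleet_size: int, attention_fraction: float) -> int:
    """Estimate how many teleoperators a robot fleet needs.

    If each robot needs a human's attention for `attention_fraction`
    of its operating time, one full-time operator can cover
    1 / attention_fraction robots.
    """
    robots_per_operator = 1.0 / attention_fraction
    return math.ceil(fleet_size / robots_per_operator)

# 2% attention -> 1 operator per 50 robots
print(operators_needed(50, 0.02))    # 1
print(operators_needed(1000, 0.02))  # 20
```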

There is one more thing. For some tasks, it is still more complicated to tell a robot what to do than to do the task yourself. Here, too, teleoperation comes in handy.

Combined with learning, teleoperation could bridge the gap to the fully-autonomous robots.

I believe that teleoperation will be the second big enabler of the Great Robot Migration.

I imagine control centers, similar to today’s call centers, where low-cost operators in India carefully navigate robots through hard times.

--


Jakub Tomášek

Screaming into the pillow about #robotics 🤖, #spaceexploration 🚀, and #asianweirdshit 🌏🥢🍙. Deploying autonomous 🚗 in Singapore and driving rovers for @ESA