Robots Making Robots

The Path Towards Robot Mass Production

Jakub Tomášek
10 min read · Jul 19, 2018

Robots are finally getting the attention they deserve: companies are sniffing high profits, the popular media scare people about losing their jobs; robots are no longer just a cliché movie theme. So, when will they finally roll out? Or is it just a bubble?

Credit: Robot worker image via shutterstock.com

Why are robots so hard to build?

Earlier, I wrote an article pondering what keeps robots confined to factories and labs. I argued we need to build a robot-friendly environment and satisfy ourselves with only 98% autonomy. That will significantly simplify the hardware and software design. Then we can iterate on the design to reach 100% with less infrastructure.

But even then, robots are really expensive and hard to build. We are trying to build machines significantly more complex than cars, without the century-long path of design iterations and engineer training. So there is a really high barrier to entry, since building such robots does not make economic sense in the short run.

Robots are, like cars, a combination of hardware and software — so-called cyber-physical systems. Compared to pure software design, the physical domain adds significant complexity to the design process. It is challenging to test and verify that a robot does what it is supposed to do under all circumstances: it is hard to replicate things like timing, or to cover the infinite combinations of the environment.

Additionally, it is hard to train engineers in everything necessary to design a robot. As a robotics software engineer, you need to:

  • have a very good grasp of math, at least up to multivariable calculus, linear algebra, and statistics,
  • have good theoretical knowledge of mechanics (and control),
  • have good — both first-principles and hands-on — knowledge of available hardware, ranging from hundreds of types of sensors, through computers and routers, to motors,
  • have at least surface knowledge of 20+ network standards and be able to delve deeper whenever using a particular one,
  • be a decent C++ developer, able to process large amounts of data,
  • and have insight into AI algorithms (like backpropagation, Bayesian classification, PCA, SVMs, clustering).
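To make the last bullet concrete, here is a toy 1-D k-means clustering in plain Python — a pedagogical sketch, not a production implementation; the data and the function name are invented for illustration:

```python
import random

def kmeans_1d(points, k, iters=20, seed=0):
    """Toy 1-D k-means: alternate between assigning points to the
    nearest centroid and moving each centroid to its cluster mean."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            # assign point to the nearest centroid
            i = min(range(k), key=lambda i: abs(p - centroids[i]))
            clusters[i].append(p)
        # recompute centroids (keep the old one if a cluster emptied)
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return sorted(centroids)

# two obvious groups, around 0 and around 10
data = [0.1, -0.2, 0.3, 9.8, 10.1, 10.3]
print(kmeans_1d(data, 2))  # centroids settle near the two group means
```

A roboticist might use something like this to segment lidar ranges into obstacles — one of the many small algorithms the job quietly assumes you know.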

This is obviously a lot to ask from one person. And that is why a typical profile of a robotics engineer is a PhD who spent 10+ years at university building robots.

Is the future in open source?

Until recently, robotics was a discipline only for academics and makers. They got around the complexity by using open source. A tool called ROS (Robot Operating System) spread to the point that most present robots use it. ROS is an open-source platform which, first and foremost, allows modularization of the robotics problem. It lets researchers focus only on their small problem and reuse algorithms from the community to solve everything else.
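This is not the real ROS API — just a plain-Python sketch of the publish/subscribe pattern that ROS is built around; the `Bus` class, the topic name, and the toy nodes are invented for illustration:

```python
from collections import defaultdict

class Bus:
    """Minimal publish/subscribe bus, illustrating how ROS-style
    topics decouple nodes: publishers and subscribers only agree
    on a topic name and message format, never on each other."""
    def __init__(self):
        self.subscribers = defaultdict(list)

    def subscribe(self, topic, callback):
        self.subscribers[topic].append(callback)

    def publish(self, topic, msg):
        for cb in self.subscribers[topic]:
            cb(msg)

bus = Bus()
log = []

# a "lidar driver" node publishes ranges; a "planner" node consumes them
bus.subscribe("/scan", lambda ranges: log.append(min(ranges)))
bus.publish("/scan", [2.5, 0.8, 1.9])  # the planner sees the closest obstacle
print(log)  # [0.8]
```

The point is the decoupling: the lidar driver can be swapped for a community-maintained one without touching the planner, which is exactly what lets researchers reuse each other's work.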

ROS breaks the cycle. Credit: Comic by Willow Garage

The story of ROS is a great example of an open source project. ROS came from a small group of PhDs from Stanford with the ambition to build the Linux of robotics and break the cycle of writing robot code from scratch; they found their own funding and called themselves Willow Garage. There were many similar efforts before and after. The difference might have been that the folks of Willow Garage focused on the community and brought it together! Now there is a yearly conference called ROSCon which physically brings ROS users together.


ROS became the standard in robotics, defining the architecture of pretty much every autonomous robot (of the many robots in the world, the industrial ones are not using ROS, though — at least not yet). It is exciting because the newcomers — companies getting into robotics right now — are also embracing ROS, its standards, and open source.

[Unfortunately, not all newcomers to the field get the spirit of open source, and a number of companies take but don’t give back. Often this is due to old corporate policies regarding code ownership.]

By accident, the community created a set of standards without the need for the rusty ISO or IEEE organizations.

This might help avoid repeating the desperate situation in industrial robotics. Industrial automation companies like ABB, KUKA, or Mitsubishi often try to lock their customers into their ecosystems:

  • Each company has its own proprietary programming language so programs can’t be easily ported.
  • They use different communication protocols.
  • Mechanical interfaces are not standardized across the industry.
  • The companies limit the set of peripherals.

You never see robotic arms from different manufacturers on one assembly line. Credit: KUKA

This led to stagnation in innovation in robot manufacturing — factory floors did not change much in the 30 years after the invention of the robot workcell. But after everyone has been ranting about Industry 4.0 for 8 years, things are finally changing! In the past decade, Universal Robots and Rethink Robotics were like a breath of fresh air — they brought lower-cost manipulators to places where robots were unthinkable before: assembly lines, working alongside people. They simplified programming and added more autonomous features.

ROS2

With companies developing robotic products based on ROS, the requirements on the middleware have changed, and ROS1 can be quite limiting. These requirements were not considered in the original ROS architecture design; this motivated a new branch called ROS2. ROS2 keeps the original ROS principles but fundamentally redesigns the internal architecture.

The question is whether ROS remains “just” a tool for prototyping or eventually makes its way into the future mass-produced robots.

What does it take to manufacture a robot?

Building a prototype and building a product are completely different problems. Fortunately, we can learn from other industries and their mistakes — particularly the automotive and aerospace industries, which, similarly to robotics, produce safety-critical cyber-physical machines.

Ford made a breakthrough by introducing a new manufacturing technology — the assembly line. It allowed cars to reach ordinary people. This might be the biggest barrier for robots: making them cheap enough and useful enough to be mass-produced.

Manufacturing requires a lot of experience; many things have been learned by trial and error over the years. Tesla recently learned this by running into a wall — it has been in big trouble ramping up production of the Model 3 in the past months.

Self-driving cars will set the precedent

Self-driving cars are the first autonomous robots with the potential to be deployed on a large scale. The process, and its successes and failures, will likely inform robot deployment in the real world for the next few decades. It is not only roboticists paying attention but also automotive engineers — both sides certainly have a lot to learn from each other.

Interestingly, the industry avoided calling self-driving cars “robots”, just as Ridley Scott in Blade Runner called the robots “replicants” and not androids. It is likely a deliberate move to avoid associating the technology with a name loaded with destruction and human doomsday. With the word robot, people think of the Terminator, and the Terminator is certainly very bad PR for a very safe means of transport.

While autonomy for cars required a huge leap in sensor technology and software development, the existing automotive ecosystem is what made the leap feasible in just a decade. Waymo, Cruise, and co. build upon existing mass-produced cars, simply adding their own sensor suites and computers running their own software. The brain of the car is separate from the guts that control it — the two will become more integrated over time.

Unfortunately, no other autonomous robotics application has either a manufacturing ecosystem to build upon or such a financial incentive to develop systems.

Is the future modular?

The automotive industry arrived at a modular, distributed architecture for the car’s control systems. A typical car now has around 100 control units connected via a CAN network. Each unit takes care of a different subsystem, like the engine or the brakes. The subsystems come from different manufacturers — this introduces competition, lowers the barrier for a company to develop a new product, increases robustness since there is no single point of failure, and simplifies certification. If something breaks, the whole module is replaced.
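One detail of how a shared CAN bus works with so many units: when several control units start transmitting at once, the frame with the numerically lowest identifier wins arbitration, which is why safety-critical subsystems get low IDs. A sketch of that rule in plain Python (the frames and IDs below are invented for illustration):

```python
def arbitrate(frames):
    """CAN-style arbitration sketch: among frames that start
    transmitting simultaneously, the one with the lowest identifier
    (highest priority) wins; the others back off and retry later."""
    return min(frames, key=lambda f: f["id"])

pending = [
    {"id": 0x244, "src": "engine",  "data": b"\x10"},
    {"id": 0x0A0, "src": "brakes",  "data": b"\x01"},  # safety-critical: low ID
    {"id": 0x3D0, "src": "climate", "data": b"\x16"},
]
winner = arbitrate(pending)
print(winner["src"])  # brakes
```

On the real bus this happens bit by bit in hardware, with no central arbiter — one reason the architecture has no single point of failure.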

Some subsystems can be changed easily in an updated version of the car without impacting the rest of the system. The flaw of this is the accumulation of hidden bugs over time. Also, CAN is not secure: basically, if any of the control units is hacked, the attacker can take over the whole car.

Meanwhile, robot prototypes are currently usually centralized. Most of the sensors and actuators are connected to a central computer, which processes the sensor data, like images, and makes the high-level decisions. Often, only the low-level control loops — DC motor controllers, for example — run on real-time hardware. Not much thought has been given to robot cybersecurity.

ROS2, in fact, allows making robots distributed and safer. Unlike in ROS1, there is no central master managing the communication between nodes. ROS2 can run on embedded systems, and its different communication layer allows building sensors as plug-and-play ROS modules. Also, many robotics tasks, like DNN inference or image processing, can be hardware-accelerated, for example using FPGAs; it might be possible to connect FPGAs as plug-and-play ROS modules too.

Is model-based design the future?

For some, it might be a surprise, but much of the software development in the automotive industry is done by moving blocks around in Simulink. Code generation from Simulink models to the control units has become the standard.

The automotive industry is known for its backwardness in software development; there is a large disconnect from the software development community and its practices. Just compare the entertainment system in a Tesla with the one in your premium BMW: Tesla, being a Silicon Valley company, hired proper UX designers and developers. But the automotive industry might very well be right about the model-based design approach for embedded applications.

While there are groups that use model-based design for robots, it is certainly uncommon. Many software developers look at model-based design with contempt: it is just moving some blocks around, that’s for kids. But it is well suited to designing control loops, computer vision, communication systems, and signal processing. And in the end, a robot is just a bunch of control loops.
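To illustrate what one of those control loops looks like — the kind of thing a Simulink block diagram encodes — here is a toy PI controller driving a first-order plant toward a setpoint. The plant model and the gains are arbitrary illustrative choices, not anything from a real robot:

```python
def run_pi_loop(setpoint, steps=1000, dt=0.01, kp=2.0, ki=1.0):
    """Toy PI control loop: each tick, measure the error, accumulate
    it, and command the plant (here an integrator: x' = u).
    Gains kp, ki are made-up illustrative values."""
    pos, integral = 0.0, 0.0
    for _ in range(steps):
        error = setpoint - pos
        integral += error * dt
        u = kp * error + ki * integral  # control signal
        pos += u * dt                   # plant responds
    return pos

print(run_pi_loop(1.0))  # settles near the setpoint 1.0
```

A joint controller, a cruise controller, or a heading controller all have this same shape; model-based design just lets you draw the loop instead of typing it.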

Model-based design minimizes bugs and simplifies interfacing between modules. Testing against the “physics” also becomes easier. And mainly, it relieves the burden on the robotics engineer, who does not have to be an amazing software developer to move blocks around, nor understand the guts of each block in detail.

Testing, testing, and more testing

Self-driving is hard, but still relatively easy compared to other ambitions in autonomous robotics. Even so, it requires millions of miles to verify safety and function — Waymo just recently announced reaching 7 million miles of autonomous driving. The system is also tested in simulation, in parallel with the real road — there, Waymo claims 5 billion miles.

By the way, a mile is such a bad measure — I would at least not use an imperial unit ;-).

Here, some software industry practices like test-driven development may come in handy. I feel that we roboticists don’t have the mindset for destructive testing, which is at the core of test-driven development. Destructive testing of our robots is, first, very expensive, and second, it just feels wrong. You give your robot a name, you spend months with “him”, it is kind of your baby — and you would never bully your baby, right?

“Destructive” testing at Boston Dynamics. Boston Dynamics likes to show off its robot bullying. The more human-like the robot looks, the more wrong it seems.

But destructive tests will be a necessary cost of getting robust robots. A lot can, in fact, be done with the help of simulation. And eventually, mass-produced robots must be cheap enough to destroy a couple of them…

A big part of building complex systems like robots is integrating the work of different people. Many errors come up only during integration, which is why integration is always a cumbersome process. Software developers commonly use continuous integration: developers commit their work at least once a day to a common development branch, which is automatically tested, so errors can be detected early.
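As a sketch of the kind of check a robotics CI job might run on every commit, here is a hypothetical differential-drive odometry function with a couple of asserted invariants — the function, its parameters, and the numbers are invented for illustration:

```python
import math

def diff_drive_step(x, y, theta, v_left, v_right, width, dt):
    """One Euler step of differential-drive odometry: wheel speeds
    map to a forward velocity and a turn rate."""
    v = (v_left + v_right) / 2.0          # forward speed
    omega = (v_right - v_left) / width    # turn rate
    return (x + v * math.cos(theta) * dt,
            y + v * math.sin(theta) * dt,
            theta + omega * dt)

# the sort of invariants an automated test suite would assert:
# equal wheel speeds -> straight line, no rotation
x, y, th = diff_drive_step(0.0, 0.0, 0.0, 1.0, 1.0, 0.5, 0.1)
assert (x, y, th) == (0.1, 0.0, 0.0)
# opposite wheel speeds -> turn in place
x, y, th = diff_drive_step(0.0, 0.0, 0.0, -1.0, 1.0, 0.5, 0.1)
assert (x, y) == (0.0, 0.0) and th > 0
print("odometry checks passed")
```

Each commit that breaks one of these invariants fails the build before the code ever touches a physical robot — the cheap, purely-software end of the testing spectrum.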

Continuous integration becomes, again, a harder problem in robotics due to its physical nature. Fetch does physical continuous integration to test their warehouse solution: tests are scheduled and performed automatically on a fleet of robots in a mock warehouse, while data, including videos, are stored on a server for inspecting the behavior. Simulation can again be helpful.

The future is…unknown

We are fortunate to live in a time in which people strongly believe in the future. They can imagine what robots can do for us (and to us) and easily spend billions to fund robot ventures, despite the technology not being even nearly ready for real deployment.

Robotics is now at a turning point. We have a movement of open source robotics coming from academia which is being embraced by the industry, so everyone can stand on the shoulders of giants. ROS2 will solve many limitations of ROS1 and allow larger real-world deployment. We might see ROS hardware modules popping up in the future, which will ease robot development. ROS will likely remain the middleware for low-volume markets and might define standards for the whole field.

Self-driving cars are meanwhile setting a precedent for the mass deployment of robots into the real world. If that goes well, the mass-produced robots of the future might be modular, and their low-level software may be developed using model-based design, which will lower the knowledge barrier to becoming a roboticist. We might have to destroy a few robots to build robust solutions able to deal with the real world.

And, well, the future is uncertain, but one thing is clear — eventually, robots will be building and designing themselves.


Jakub Tomášek

Screaming into the pillow about #robotics 🤖, #spaceexploration 🚀, and #asianweirdshit 🌏🥢🍙. Deploying autonomous 🚗 in Singapore and driving rovers for @ESA