Robot Factors for Automated Processes


As we develop more ways to automate processes, environments specifically tailored to the capabilities of a robot will improve outcomes. The rules of robot factors are still being written. Here are 5 questions to ask when designing a process for a robot:

1. Is it within the robot’s capabilities?

Set robots up for success by working within their capabilities. Many tasks that humans handle easily are still hard for robots, so a robot's skills in visual identification, audio processing, mobility, and dexterity may need to be evaluated to determine whether human assistance is required.

Engineers must sometimes make tradeoffs based on the capabilities of different types of robots. For example, a SCARA robot is a 3- or 4-axis machine that can work at a high speed to manipulate objects on a horizontal plane. In contrast, 6-axis robots have a bit more flexibility, and their movement is more comparable to a human arm. However, 6-axis robots can be much slower than SCARA robots.

Bottom line, assess what capabilities your robots have and if they’re right for a given job. In some cases, a manual approach – or a different robot – would be best.
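The SCARA-versus-6-axis tradeoff above can be sketched as a simple capability check. This is a hypothetical illustration, not a real robot API; the robot profiles and task requirements are made-up stand-ins for whatever spec sheet data you would actually use.

```python
# Hypothetical sketch: matching task requirements to robot capabilities.
# The profiles and tasks below are illustrative, not real product specs.

SCARA = {"axes": 4, "plane": "horizontal", "relative_speed": "high"}
SIX_AXIS = {"axes": 6, "plane": "any", "relative_speed": "moderate"}

def suitable(robot: dict, task: dict) -> bool:
    """Return True if the robot meets the task's minimum requirements."""
    if robot["axes"] < task["min_axes"]:
        return False
    # A horizontal-plane robot can't serve a task that needs arbitrary poses.
    if task["plane"] != "horizontal" and robot["plane"] == "horizontal":
        return False
    return True

pick_and_place = {"min_axes": 4, "plane": "horizontal"}
welding_contour = {"min_axes": 6, "plane": "any"}

print(suitable(SCARA, pick_and_place))      # True
print(suitable(SCARA, welding_contour))     # False
print(suitable(SIX_AXIS, welding_contour))  # True
```

In practice the check would weigh speed, payload, and reach as well, but the principle is the same: state the task's requirements explicitly, then choose the robot (or a manual approach) that satisfies them.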


2. What info does the robot need to understand its environment?

Robots’ capacity for interpreting their environment is limited. Interpretive skills must be programmed so that the robot can understand critical parts of the environment. Designers are also limited by the quality of the audio, video, and touch-related sensors that the robot possesses. Signal interference, such as outside noise, is an obstacle.
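One common way to handle limited sensors and noisy signals is to gate the robot's interpretation on a confidence score and escalate low-confidence readings to a human. The threshold and detections below are invented for illustration; a real perception stack would supply these values.

```python
# Hypothetical sketch: gating a robot's perception on sensor confidence.
# Threshold and readings are made up for illustration.

CONFIDENCE_THRESHOLD = 0.8

def interpret(readings: list) -> list:
    """Accept detections above threshold; flag the rest for human review."""
    decisions = []
    for label, confidence in readings:
        if confidence >= CONFIDENCE_THRESHOLD:
            decisions.append(f"accept:{label}")
        else:
            decisions.append(f"escalate:{label}")  # ask a human operator
    return decisions

# A noisy frame: one confident detection, one degraded by interference.
noisy_frame = [("weld_seam", 0.95), ("part_edge", 0.55)]
print(interpret(noisy_frame))
```

This kind of explicit fallback acknowledges the sensor-quality limits described above instead of letting the robot act on data it cannot trust.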

The human brain is the motherboard for our ability to think, move, and communicate. Scientists have to find a way to mimic this, and they may actually be on a promising path already. Researchers at the University of Maryland developed a robot that can “learn” how to cook by observing carefully crafted YouTube videos. They designed two neural networks: one to understand the objects being used, and the other to process what the human is doing. The whole point is to give the robot the tools to learn and understand cooking within its own environment.

If we want to realize the full benefit of automation, it’s our responsibility as designers, engineers, and innovators to provide robots with the equipment they need to be successful.


3. Does the robot have the complete roadmap?

Humans are very good at planning. We can hold a complete (and often highly sophisticated) roadmap in our heads and fill in the intermediary steps between where we are and where we want to go. Robots don’t have this skill. They may skip small steps that are a necessary part of the desired outcome unless we tell them exactly what to do. Designers must be careful that the robot has the complete, detailed “recipe” for the outcome because the robot is not able to course-correct in the same way a human can.

Angelica Lim, an Assistant Professor at Simon Fraser University, summed this up well after an experience with hospitality robots in Japan: “Today’s AIs, much like the robot I encountered in Osaka, are ‘weak’—they don’t have any real understanding. Instead, they are powered by giant rulebooks containing massive quantities of data stored on the Internet. They can act intelligent but can’t understand the true meaning of what they say or do.”

This is why robots need human intervention to think and operate. If we share the roadmap – through planning, programming, and implementation – then robots will be more successful in their work.
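The complete-recipe idea can be made concrete as an ordered task list that the robot must execute step by step, with no room to skip an intermediary action. The step names and stub executor below are hypothetical; a real system would dispatch each step to actual motion commands.

```python
# Hypothetical sketch: encoding a complete, ordered "recipe" so the robot
# never skips an intermediary step. Step names are illustrative.

RECIPE = ["pick_part", "align_part", "insert_part", "verify_fit", "release"]

def execute(recipe: list, do_step) -> list:
    """Run every step in order; stop immediately if any step fails."""
    completed = []
    for step in recipe:
        if not do_step(step):
            raise RuntimeError(f"step failed: {step}; completed: {completed}")
        completed.append(step)
    return completed

# A stub executor standing in for real robot commands.
print(execute(RECIPE, lambda step: True))
```

Writing the plan down this explicitly is exactly the "sharing the roadmap" the section describes: the human supplies every step, and the robot only has to follow them in order.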

4. How does it detect, report, and deal with malfunctions or incorrect results?

Designers must consider what kinds of errors or malfunctions the robot might face and tell the machine how to deal with them or report them. With the development of error recovery methods, designers give robots the tools to identify incorrect results or problems and solve them. In addition, robots must understand how to resolve roadblocks or conflicting information.
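A minimal detect-report-recover pattern might look like the sketch below: attempt an action, verify the result, report and recover on failure, and escalate to a human once retries are exhausted. Everything here (the flaky actuator, the retry budget) is an invented example, not a specific error-recovery framework.

```python
# Hypothetical sketch: a detect-report-recover loop with bounded retries.

def run_with_recovery(action, check, recover, max_retries=3):
    """Try an action; verify the result; attempt recovery before giving up."""
    for attempt in range(max_retries + 1):
        result = action()
        if check(result):          # detect: is the result correct?
            return result
        print(f"attempt {attempt}: bad result {result!r}, recovering")  # report
        recover()                  # deal with it: clear the fault and retry
    raise RuntimeError("recovery exhausted; escalating to a human")

# Simulated flaky actuator that succeeds on its second try.
state = {"tries": 0}
def flaky_action():
    state["tries"] += 1
    return "ok" if state["tries"] >= 2 else "jammed"

print(run_with_recovery(flaky_action, lambda r: r == "ok", lambda: None))
```

The key design choice is that failure is never silent: every bad result is detected, reported, and either recovered from or handed to a person.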

While this is what should happen, it unfortunately isn’t always the case. Between 2000 and 2013, robot-assisted surgeries were linked to 144 deaths, 1,391 injuries, and over 8,000 reports of device malfunction. Almost two million robot-assisted surgeries took place during this period. The odds are in a patient’s favor, but for something as high-stakes as surgery, striving for five nines (99.999%) reliability is important.
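A quick back-of-the-envelope calculation on the figures above shows how far those outcomes fall short of five nines. This is illustrative arithmetic only, using the approximate counts cited in the text.

```python
# Back-of-the-envelope check of the surgery figures cited above.

surgeries = 2_000_000               # approximate robot-assisted surgeries, 2000-2013
deaths, injuries, malfunctions = 144, 1_391, 8_000

adverse = deaths + injuries          # patient-harm events (malfunctions tracked separately)
reliability = 1 - adverse / surgeries
malfunction_rate = malfunctions / surgeries

print(f"patient-harm reliability: {reliability:.6f}")
print(f"device malfunction rate:  {malfunction_rate:.6f}")

five_nines = 0.99999
print(reliability >= five_nines)    # False: roughly three nines, not five
```

At five nines, two million surgeries would permit only about 20 adverse events; the reported counts are nearly two orders of magnitude higher.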

If we seek to assign important tasks like these to robots – tasks that demand a level of precision robots excel at – then we should also plan for contingencies.


5. Does it meet safety and ethical standards?

Another challenge designers face is how to qualify robotic systems to safety and ethical standards. If you recall, one of the founding principles of robotics (from Isaac Asimov’s Three Laws of Robotics) is that “a robot may not injure a human being, or, through inaction, allow a human being to come to harm.” So, how can scientists, designers, and engineers ensure that their robots uphold this rule and abide by standard ethics? Working with a regulating group with an emphasis on robotics is a great start. One of the closest things we have to a regulating group is the Robotic Industries Association (RIA). Not only do they provide industry information for engineers, managers, and executives, but they also help come up with regulatory standards.

Currently, the RIA is working in partnership with the Occupational Safety and Health Administration (OSHA) and the National Institute for Occupational Safety and Health (NIOSH) to create best practices for workplaces that use industrial robotics. The group is also building safety requirements for industrial robots and industrial robot systems (ISO/CD 10218-1 and ISO/CD 10218-2). The RIA has developed industrial mobile robot safety requirements, guidance for users, testing methods for power and force limiting robot systems, and safety-related software (R15.08, RIA TR R15.806, and RIA TR R15.906). Groups like the RIA will be critical in steering the ethical and safety elements of robotic automation factors as innovation continues.


Jeff Alexander

Chief Science Officer

Jeff is Fresh’s Chief Science Officer and an innovator with over 20 years of engineering experience. Prior to Fresh, Jeff founded SiTech Research Test and Development, a comprehensive product development and test systems solution provider whose clients included Philips, Universal Electronics, and Fortune 100 companies.

Jeff’s career spans nearly a decade at Microsoft, where he helped develop successful products like the original Xbox and the Xbox Kinect; stints as Principal Hardware Architect at Nokia, developing innovative IoT products and image sensors; and a lead engineering position at LaserMotive, where he played a primary role in the design of a laser targeting and delivery system that won an award from NASA.

He holds a BSEE from the University of Alaska Fairbanks.