Robots are no longer confined to factory floors behind safety fences. Collaborative and humanoid robots now operate in direct proximity to people, shifting safety from a contained engineering problem to a system-level risk with real human consequences.
Humanoid robots are currently being designed for use in public and semi-public spaces, such as warehouses, restaurants, and retail environments, and eventually in homes. Morgan Stanley Research projects that up to a billion humanoid robots could be deployed by 2050.
When robots work closely beside humans, failures in sensing, control, or communication are no longer tolerable edge cases; they are design flaws with direct safety, liability, and certification consequences. Functional safety ensures that safety-related functions continue to operate even when faults occur. In robotics, it also determines whether a system can be deployed, certified, and trusted in human environments.
For electronics and embedded control engineers, that means doing more than just meeting standards. It’s about making clear architectural choices for sensors, controllers, diagnostics, communication, and response time to ensure a robot can detect hazards and react quickly enough to protect human life.
Challenges of functional safety in robotics
Humanoid and quadruped robots expose a hard truth: Many functional safety architectures designed for traditional industrial robots do not scale to mobile, space-constrained, human-facing systems.
Traditional industrial robots rely on distributed, multi-module safety architectures. However, this approach breaks down for humanoid and quadruped robots, where limited body space makes it impractical to deploy multiple discrete safety hardware modules.
One solution is to integrate the safety control algorithms on the same hardware platform as non-safety motion control and AI algorithms. This saves space and reduces latency, but also concentrates risk.
When safety and non-safety software share the same hardware, engineers must design robust mechanisms to prevent faults in non-safety functions from affecting the execution of safety functions. This typically involves hardware isolation features, memory protection, independent execution paths, and strict software development processes.
To understand expectations, engineers should study the fundamental functional safety standards IEC 61508 and ISO 13849-1, as well as ISO 10218-1 and ISO 10218-2, which are currently the world’s most comprehensive safety standards for industrial robots.
While these standards define required safety outcomes, they deliberately avoid prescribing implementation, placing the burden of architectural judgment squarely on engineering leadership. That requires close cross-domain collaboration between hardware and software engineers, who must work together to build functional safety into system design from day one.
5 design considerations for functional safety systems in robotics
Engineers must first determine the required Safety Integrity Level (SIL) or Performance Level (PL) based on risk assessment. From there, they must design hardware and software architectures that can reliably achieve that level. To do so, they must consider multiple factors:
- Hardware redundancy: At the hardware level, functional safety typically starts with redundancy. Implementation varies by function, but the most common pattern is a dual-channel architecture.
For example, take Tool Center Point (TCP) Speed Monitoring. This safety and control function measures and limits the speed of the exact point on a tool that performs work, e.g., the tip of a welding torch.
A fully redundant implementation built for human interaction might include:
- Two independent sensor inputs, such as dual encoders or position sensors
- Two independent processing paths, often using separate processor cores or chips
- Two independent safety outputs capable of stopping motion or removing power
With a redundant structure, the safety function continues to operate even if an individual element fails.
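As an illustration, a 1oo2 (one-out-of-two) cross-check for TCP speed monitoring might look like the following sketch. All names, limits, and tolerances here are hypothetical, not drawn from any standard or product:

```python
# Hypothetical sketch of dual-channel (1oo2) TCP speed monitoring.
# Limits, tolerances, and function names are illustrative only.

SPEED_LIMIT_MM_S = 250.0   # example reduced-speed limit for collaborative operation
CROSS_CHECK_TOL = 0.05     # max relative disagreement allowed between channels

def tcp_speed(prev_pos, pos, dt):
    """Estimate TCP speed (mm/s) from two Cartesian position samples."""
    dist = sum((b - a) ** 2 for a, b in zip(prev_pos, pos)) ** 0.5
    return dist / dt

def dual_channel_check(speed_a, speed_b):
    """Return True if motion may continue, False to demand a safe stop.

    Channel disagreement is itself treated as a fault: fail to the safe state.
    """
    ref = max(abs(speed_a), abs(speed_b), 1e-9)
    if abs(speed_a - speed_b) / ref > CROSS_CHECK_TOL:
        return False   # channels disagree -> assume a sensor or processing fault
    return max(speed_a, speed_b) <= SPEED_LIMIT_MM_S
```

In a certified design, the two speed estimates would come from independent encoders and be computed on separate cores or chips, with each channel able to drive the safety output on its own.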
- Built-in diagnostics: Redundancy alone is not enough to ensure functional safety. Each channel must also include diagnostics to detect random hardware faults. This requires component-level analysis, often through Hardware Failure Mode and Effects Analysis (FMEA).
Diagnostic coverage must be calculated and documented to demonstrate compliance with the target SIL or PL. IEC 61508-2, Tables A.1 to A.14, provides detailed guidance on recommended diagnostic techniques for common hardware components. Engineers should reference these tables when designing safety-related circuitry.
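The headline IEC 61508 metrics can be computed directly from the FMEA failure-rate categories. A minimal sketch (the FIT values in the test are illustrative, not from a real analysis):

```python
# Illustrative IEC 61508-style metrics from FMEA failure-rate categories.
# lambda_s: safe failures; lambda_dd: dangerous detected; lambda_du: dangerous undetected.
# Rates are often expressed in FIT (failures per 1e9 hours); units cancel here.

def diagnostic_coverage(lambda_dd, lambda_du):
    """DC = fraction of dangerous failures detected by diagnostics."""
    return lambda_dd / (lambda_dd + lambda_du)

def safe_failure_fraction(lambda_s, lambda_dd, lambda_du):
    """SFF = (safe + dangerous-detected failures) / all failures."""
    return (lambda_s + lambda_dd) / (lambda_s + lambda_dd + lambda_du)
```

For example, a component whose dangerous failures split into 90 FIT detected and 10 FIT undetected achieves DC = 0.90, which the IEC 61508-2 tables classify as "high" for some techniques and merely "medium" for others.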
- Fast response time: When it comes to avoiding injuries or harm to humans, milliseconds of latency can matter. That makes many robotics safety functions time-critical, including speed monitoring, position limits, and force or torque limits at the TCP.
Insufficient computing power can increase response times, turning theoretically compliant designs into systems that react too slowly in real-world conditions. For this reason, some modern safety controllers use higher-performance processor architectures rather than traditional low-end microcontrollers.
For example, NEXCOM built its safety controller on a dual-ARM chip architecture, which enables faster calculations and therefore a shorter safety reaction time.
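A back-of-the-envelope stopping-distance model shows why reaction latency matters. The numbers below are illustrative, not from any standard:

```python
# Illustrative stopping-distance model: distance travelled during the safety
# function's reaction latency, plus braking distance at constant deceleration.

def stopping_distance_mm(speed_mm_s, reaction_time_s, decel_mm_s2):
    """Total travel from hazard detection to standstill."""
    return speed_mm_s * reaction_time_s + speed_mm_s ** 2 / (2.0 * decel_mm_s2)
```

At a TCP speed of 1000 mm/s with 5000 mm/s² of braking, a 10 ms reaction time yields 110 mm of travel before standstill; stretching the reaction time to 50 ms adds another 40 mm, which can be the difference between a near miss and an impact.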
- Safe communication protocols: Robotic systems often rely on networked communication between sensors, controllers, and actuators. From a functional safety perspective, the physical communication medium can often be treated as a “black channel.” This means that standard Ethernet or similar networks can be used without special safety-rated hardware.
Instead, safety is ensured through software-implemented safety-related communication protocols. IEC 61784-3 defines the relevant requirements and mechanisms to ensure that safety data is protected against corruption, delay, repetition, or loss, regardless of the underlying network.
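The kinds of mechanisms IEC 61784-3 describes (sequence numbering, integrity codes, timeliness checks) can be sketched as follows. This is a toy example using only a CRC-32 and a sequence number, not a compliant safety protocol; real protocols also guard against delay and masquerading with watchdog timers and connection identifiers:

```python
# Toy "black channel" safety wrapper: sequence number + CRC over the payload.
# Illustrative only; not a compliant IEC 61784-3 protocol.
import struct
import zlib

def build_safety_pdu(seq, payload):
    """Wrap payload with a sequence number and CRC-32."""
    header = struct.pack(">I", seq)
    crc = zlib.crc32(header + payload)
    return header + payload + struct.pack(">I", crc)

def check_safety_pdu(pdu, expected_seq):
    """Return the payload if CRC and sequence are valid, else None (fail safe)."""
    if len(pdu) < 8:
        return None
    header, payload, crc_bytes = pdu[:4], pdu[4:-4], pdu[-4:]
    (crc,) = struct.unpack(">I", crc_bytes)
    if zlib.crc32(header + payload) != crc:
        return None   # corruption detected
    (seq,) = struct.unpack(">I", header)
    if seq != expected_seq:
        return None   # loss, repetition, or reordering detected
    return payload
```

The key point is that none of these checks require safety-rated network hardware; any detected anomaly simply drives the receiver to its safe state.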
- Large safety margin: Engineers must consider not only the safety level of individual components but also that of the system as a whole, and the two are not always equivalent. Because the failure probabilities of components in a safety chain add up, a combination of SIL 2 components may only achieve SIL 1 at the system level.
To create a larger safety margin, it makes sense to incorporate components at a higher SIL than the overall system requires. For example, NEXCOM’s safety controller meets SIL 3 and Hardware Fault Tolerance (HFT) of 1 (per IEC 61508), and PL e with Cat. 3 (per ISO 13849-1). Integrating a SIL 3 safety controller into a robot system provides a larger safety margin, enabling the overall system to achieve a safety level of SIL 2 or higher.
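The aggregation effect can be sketched by summing per-element PFH (average frequency of dangerous failure per hour) values for a series safety chain and mapping the total onto the high-demand SIL bands of IEC 61508-1. The helper below is a simplification; a real assessment also covers architectural constraints such as HFT and SFF:

```python
# PFH bands for high-demand / continuous mode, per IEC 61508-1.
# Each tuple: (SIL, band lower bound inclusive, band upper bound exclusive).
SIL_BANDS = [(4, 1e-9, 1e-8), (3, 1e-8, 1e-7), (2, 1e-7, 1e-6), (1, 1e-6, 1e-5)]

def achieved_sil(pfh_values):
    """Sum per-element PFH for a series chain and map the total to a SIL band.

    Returns 0 if the total falls outside every band (no SIL claimable).
    Simplified sketch: ignores architectural constraints (HFT, SFF).
    """
    total = sum(pfh_values)
    for sil, lo, hi in SIL_BANDS:
        if lo <= total < hi:
            return sil
    return 0
```

For example, three elements that each sit comfortably in the SIL 2 band (PFH of 5e-7) sum to 1.5e-6, which lands in the SIL 1 band, which is exactly the aggregation effect that motivates choosing components rated above the system target.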
A safer future for humanoid and collaborative robots
As robots move into human environments, functional safety cannot be treated as a compliance exercise. It must be designed as a first-order system constraint — one that shapes architecture, compute choices, and integration decisions from day one.