In warehouses, workers may slow down not because a robot is too close, but because they are unsure whether the robot sees them. In hospital hallways, staff step aside earlier than necessary simply because they cannot easily read what a delivery robot is about to do next. None of these moments reflects a failure of the robot’s technical safety. They reflect a breakdown in shared understanding between humans and machines.
That distinction matters. Safety in mixed human–machine environments is not only about what a system is capable of, but also about how people behave around it.
Consider everyday driving. Even in modern vehicles equipped with Level 2 automation (lane keeping, adaptive cruise control, automatic emergency braking, and blind-spot monitoring), the human driver remains central to how safety unfolds in the real world. These systems shape how the vehicle and driver behave, which in turn shapes how everyone else on the road responds.
Imagine a driver and a pedestrian at a crosswalk. The driver stops, makes eye contact with the pedestrian, and the pedestrian looks back. That brief connection acts as a silent confirmation: each party knows the other is aware of them. It is an unspoken signal that allows the pedestrian to proceed confidently.
Similarly, when changing lanes, a driver uses a turn signal not only so the vehicle can adjust its internal warnings, but also to inform other drivers of intent. There is a chain of communication at work between the machine, the driver, and other road users.
Over the decades, the automotive industry has recognized that the display itself is part of the safety system. As digital clusters have become more sophisticated, the information presented to the driver - even seemingly minor things like driving mode changes - must be accurate, timely, and trustworthy because it can influence human behavior.
This is why safety-certified software platforms like QNX have become foundational to modern digital cockpits and instrument clusters. The information shown is safety-critical: speed, lane status, collision warnings, blind-spot alerts, and more. If that visual layer is misleading, delayed, or unreliable, the risk does not remain in software - it appears in real driving behavior. The display is not merely a user interface; it is part of the safety design.
Now contrast this with mobile robots operating in warehouses, hospitals, retail spaces, and other public environments.
Many of these robots move fairly well from an engineering standpoint. Their trajectories are smooth, their obstacle avoidance is reasonably precise, and their planning is consistent. Yet they rarely provide humans with clear, intuitive signals about their intent.
There is no universal robot equivalent to eye contact. There is no shared visual language that allows a person to know at a glance whether a robot sees them, is yielding, or is about to turn.
As a result, people compensate because there’s no driver-in-the-loop. They hesitate earlier than necessary, step farther away than required, or sometimes move in ways that are actually less safe because they are guessing the robot’s intent. Again, this is not about whether the robot is technically unsafe - it is about recognizing that safety in human environments is relational, not just mechanical.
This is where visual intent becomes critical.
If robots clearly communicated their intended movements, those cues would shape how humans behave around them. In doing so, the cues would effectively become part of the safety system rather than a cosmetic feature.
This makes it important to extend lessons from automotive functional safety, codified in ISO 26262, into mobile robotics. The logic is similar, even if the driver is now replaced by the robot's software.
As robotic standards such as ISO 3691-4, ISO 13482, and ISO 22440 continue to evolve, strong consideration should be given to treating human-facing visual cues as safety-critical elements: how intent is communicated, how consistent that language is across platforms, and how accurate it must be to avoid misleading people.
Safe movement in human environments is not only about avoiding collisions; it is also about enabling people to move with confidence around machines, to feel seen, acknowledged, and informed rather than uncertain. Simple universal cues, such as !!! for caution, STOP for stopping, or brief labels like Yielding or Turning, could make a robot's state legible from a distance. Floor projections showing the robot's planned path could further reduce guesswork about what it will do next. These signals would complement existing safety systems, shaping how people move around robots.
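As a rough illustration of how such a cue vocabulary might be wired up in software, the sketch below maps a hypothetical set of planner states to the kinds of short labels described above. The state names, cue strings, and function are assumptions made for illustration only, not an existing API or standard; in a real system, this mapping would itself need to meet the same accuracy and timing expectations as the rest of the safety chain.

```cpp
// Minimal sketch (illustrative only): mapping a robot's planned motion state
// to a human-facing cue. All names here are hypothetical, not part of any
// product or standard.

#include <iostream>
#include <string>

// Hypothetical set of planner states a mobile robot might expose.
enum class MotionState {
    Cruising,       // following its planned path at normal speed
    Yielding,       // slowing to let a person pass
    Stopping,       // coming to a halt
    TurningLeft,
    TurningRight,
    Caution         // e.g., a person detected very close to the path
};

// Map each state to a short, legible cue that a display or floor projector could show.
std::string cueFor(MotionState state) {
    switch (state) {
        case MotionState::Cruising:     return "->";         // continuing ahead
        case MotionState::Yielding:     return "YIELDING";
        case MotionState::Stopping:     return "STOP";
        case MotionState::TurningLeft:  return "TURNING <";
        case MotionState::TurningRight: return "TURNING >";
        case MotionState::Caution:      return "!!!";
    }
    return "!!!"; // default to the caution cue rather than showing nothing
}

int main() {
    // Example: the planner reports a yield, and the cue is handed to the display layer.
    MotionState current = MotionState::Yielding;
    std::cout << "Display cue: " << cueFor(current) << "\n";
    return 0;
}
```

The value of such a mapping lies less in the code than in its consistency: if every robot in a facility used the same small, unambiguous vocabulary, people would not have to relearn the signals platform by platform.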
Ultimately, a robot that moves well has achieved something important, but a robot that moves well and clearly communicates its intent represents a more complete vision of safe autonomy in the real world.
About the author
Winston Leung is Senior Manager at QNX.