How to Investigate When a Robot Causes an Accident

Nowadays, robots are becoming more prevalent in our daily lives. They can be beneficial (bionic limbs, robotic lawnmowers, or robots that bring meals to quarantined individuals) or entertaining (robotic dogs, dancing toys, and acrobatic drones). Perhaps the only constraint on what robots will be able to accomplish in the future is one's creative imagination.

But what happens if they do something that harms us rather than doing what we want? What happens, for example, if a bionic arm is involved in a car accident?

Accidents involving robots are becoming a concern for two reasons. First, as the number of robots increases, so will the number of accidents they are involved in. Second, we are becoming more adept at building more complicated robots.

When a robot becomes more complex, it becomes more difficult to understand what went wrong.

Most robots are powered by artificial intelligence (AI). AIs make decisions in much the way humans do (though the decisions they make may be objectively good or bad). These decisions can range from recognizing an object to interpreting speech.

People train AIs to make these decisions for robots using massive datasets. Before giving an AI a task, people check it for accuracy: how well it does what we want it to do.
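As a loose illustration (all names and data here are hypothetical, not from any real system), "checking for accuracy" can be as simple as comparing an AI's decisions against the answers we expected:

```python
# Hypothetical sketch: measuring how often an AI's decisions match
# what we wanted it to do, using a small labelled test set.

def accuracy(predictions, expected):
    """Fraction of decisions that matched the expected answer."""
    correct = sum(1 for p, e in zip(predictions, expected) if p == e)
    return correct / len(expected)

# e.g. a speech interpreter tested on four voice commands
predictions = ["stop", "go", "left", "left"]
expected = ["stop", "go", "left", "right"]
print(accuracy(predictions, expected))  # 3 of 4 correct -> 0.75
```

Real evaluations are far more involved, but the principle is the same: before deployment, the AI's behaviour is scored against known correct outcomes.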


Picture: TheConversation


People can build AIs in a variety of ways. Consider the robot vacuum. It might be programmed to head off in a random direction whenever it bumps into a surface.

Alternatively, it might be programmed to map its surroundings, detect obstacles, cover the whole floor area, and return to its charging base. While the first vacuum acts directly on data from its sensors, the second feeds that data into an internal mapping system. In both cases, the AI takes in data and makes a decision based on it.
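The two vacuum strategies above can be sketched in a few lines of code. This is a deliberately simplified illustration (the class and function names are invented for this example, not taken from any real vacuum's firmware):

```python
import random

def random_bounce(heading, bumper_hit):
    """Strategy 1: on a collision, pick a new random heading (in degrees);
    otherwise keep going the same way. No memory of the environment."""
    if bumper_hit:
        return random.uniform(0, 360)
    return heading

class MappingVacuum:
    """Strategy 2: feed sensor data into an internal map of obstacles,
    so the robot can plan around places it has already bumped into."""
    def __init__(self):
        self.obstacles = set()

    def observe(self, position, bumper_hit):
        if bumper_hit:
            self.obstacles.add(position)  # remember where the collision was

    def is_blocked(self, position):
        return position in self.obstacles
```

The difference matters for investigation: the first robot's behaviour depends only on the last bump, while the second's depends on everything it has mapped so far, so there is far more internal state to reconstruct when something goes wrong.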

The more complicated a robot's capabilities, the more kinds of information it must interpret. That may also mean evaluating multiple sources of one kind of data, such as, in the case of auditory data, a live voice, a radio, and the wind.

As robots become more complicated and able to act on a wider range of information, determining which information the robot acted on becomes more important, especially when harm is done.

Accidents Occur


Picture: TheConversation

Robots, like any other product, can and do go wrong. Sometimes this is due to an internal problem, such as the robot failing to recognize a voice command.

Sometimes it is caused by something external, such as one of the robot's sensors being damaged. And it can be a combination of the two, such as a robot not being designed to work on carpets and "tripping". Robot accident investigations must consider all possible causes.

While it is frustrating if the robot is damaged when something goes wrong, we are far more concerned when a robot causes, or fails to minimize, harm to a person.

For instance, a bionic arm might fail to grasp a hot beverage and spill it on its owner, or a care robot might fail to register a distress call after a frail user falls.

What makes robot accident investigation distinct from human accident investigation? Notably, robots lack motivation.

We want to understand why a robot made a particular decision, given the precise set of inputs it had at the time.

In the bionic arm example, was there a miscommunication between the user and the arm? Did the robot confuse multiple signals? Did a joint lock up unexpectedly?

In the case of the person falling, could the robot not "hear" the call for help over a noisy fan? Or did it have trouble interpreting the user's speech?

The Mysterious Black Box


Picture: TheGuardian

A fundamental advantage of robot accident investigation over human accident investigation is the possibility of a built-in witness.

Commercial flights have a comparable witness: the black box, designed to resist plane crashes and offer information on what caused the catastrophe.

This information is invaluable for understanding accidents and preventing them from happening again.

As part of RoboTIPS, a project focused on responsible innovation for social robots (robots that interact with people), we developed the ethical black box: an internal record of the robot's inputs and corresponding actions.

The ethical black box is built for each type of robot it inhabits and is designed to record all the information the robot acts on. That could be voice, vision, or even brainwave activity.
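At its core, such a device is an append-only, timestamped log of inputs and actions. The following is only a conceptual sketch, assuming a simple record structure (the class and field names are invented here, not the project's actual design):

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class EthicalBlackBox:
    """Hypothetical sketch: an append-only log pairing every input the
    robot received with every action it took, each with a UTC timestamp."""
    records: list = field(default_factory=list)

    def log(self, kind, payload):
        # kind is "input" (e.g. voice, vision) or "action" (what the robot did)
        self.records.append({
            "time": datetime.now(timezone.utc).isoformat(),
            "kind": kind,
            "payload": payload,
        })

# Usage: a care robot records what it "heard" and what it decided to do
box = EthicalBlackBox()
box.log("input", {"sensor": "microphone", "transcript": "help"})
box.log("action", {"decision": "alert_caregiver"})
```

After an accident, an investigator could replay this log to see exactly which inputs preceded which decisions, much as crash investigators replay a flight recorder.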

We have put the ethical black box through its paces on a range of robots, in both laboratory and simulated accident conditions. The goal is for the ethical black box to become a standard feature in robots of all makes and applications.

While the data captured by the ethical black box still has to be interpreted in the event of an accident, having this data in the first place is crucial in allowing us to investigate.

The investigative process provides an opportunity to guarantee that the same mistakes do not occur again.

The ethical black box is a method not just for building better robots but also for innovating responsibly in an exciting and dynamic sector.

Paul Syverson
Paul Syverson is the founder of Product Reviews. Paul is a computer scientist; he has carried out a handful of significant studies that contributed many special features to the site. He has a huge passion for computers and other tech products, and is always diligent in delivering quality writing that brings the most value to readers.