Political and Legal Influences of Robotic Engineering

Robotic engineering deals with the design, construction, and application of robots. High reliability, accuracy, and increased speed of operation are among the major benefits of robotics, and its use also leads to higher productivity. However, robotics can have negative impacts. Robots may directly or indirectly inflict injury on humans or destroy property; such injury could be accidental, caused by incorrect human instructions to the robot. Robotics may also harm workers indirectly, because its adoption results in job redefinition or job displacement. Although robots are not always as effective as humans, they are a significant improvement for society because of their ability to work in environments that are unsafe or inhospitable for humans, their freedom from human limitations such as boredom, and their capacity to perform dangerous tasks. Through their creations, robotics engineers help make jobs safer, easier, and more efficient, particularly in the manufacturing industry (Bogue, 2014).

Although robots are often seen as a replacement for humans, in most instances humans will remain in control. They may operate the robots directly or retain the power to veto a robot's course of action, and robots will also be able to interact with humans. For example, unmanned aerial vehicles (UAVs) can fly for longer periods than human pilots can endure. However, they are still controlled by humans, who must stay awake for hours while operating them. As a result, some UAV operators experience fatigue from being overworked, which increases the risk of errors in judgment. Even when people are not fatigued, they are prone to making bad decisions. Critics of UAVs note that operators may control the drones from thousands of miles away, and that this distance may desensitize them to killing and erode the care they take in targeting decisions. This may lead to unjustified strikes (Pagallo, 2011).

Robots with defensive or offensive capabilities raise several concerns about compliance with international humanitarian law – the laws of war. For example, various parties argue that it is illegal for robots to make their own decisions about whether to attack a given area, as some robots currently do, because robots lack the technical ability to distinguish between combatants and noncombatants. This principle of distinction appears in various laws of war, such as the Geneva Conventions, and in just-war theory; it requires belligerents not to target noncombatants. However, it is difficult for a robot to distinguish between a terrorist with a gun and a girl pointing an ice cream cone at it. In fact, even humans find it difficult to distinguish between a shepherd carrying a gun to protect his flock of goats and a terrorist with an AK-47.

The use of lethal robots may also lead to a use of force that is disproportionate to the military objective of a strike, causing collateral damage – the unintentional death of civilians. This raises the question of what number of innocent civilian deaths is acceptable for every enemy combatant killed: is it 2:1 or 10:1? It is difficult to reach a consensus on such a number, and a given target may pose so great a threat that even a collateral damage ratio of 500 to 1 might be acceptable to certain parties.

Even if the above problems are solved, other problems are likely to arise. For example, if a robot were created that targeted only combatants and caused no collateral damage, a new issue would emerge: the International Committee of the Red Cross (ICRC) has proposed criteria that would ban weapons causing a field mortality rate of more than 25% or a hospital mortality rate of more than 5%. A robot that kills every human it targets would have a mortality rate of nearly 100%, far above the 25% threshold. Creating such a robot may well be possible, given robots' high levels of accuracy, and it would be fearsome and deadly. This would violate the principle of a fair fight in war; poisons were banned on similar grounds, for being inhumane and too effective. The creation of such a robot would also raise the ethical question of whether it is right for people to build machines that kill other people (Anderson & Waxman, 2012).

Various issues arise from the use of enhanced warfighters. For example, if an individual can resist pain through robotics or genetically engineered drugs, the question arises whether harming that individual – say, cutting off a robotic limb – would still count as torture. Since soldiers do not forfeit all their rights when they enlist, it is important to determine what kind of consent is required to involve them in biomedical experiments. The use of robotic enhancements also raises the question of whether enhanced soldiers should be treated differently, and how the use of enhancements on certain soldiers affects the cohesion of a unit in which they serve alongside unenhanced soldiers (Anderson & Waxman, 2012).
