Drones are playing an increasingly prominent role in modern warfare. At one time, the suggestion that robots would be used on the battlefield might have been considered the stuff of science-fiction novels. But as electronic technologies become more deeply integrated into the ways wars are waged, robotic strategies seem more like science and less like fiction. For proof, one need only observe the growing sophistication of miniature unmanned aerial vehicles (UAVs), and how more and more organizations, including police departments, are considering their use for general law-enforcement and surveillance functions.

The use of UAVs for surveilling difficult-to-reach areas, such as the border regions of Afghanistan and Pakistan, is no longer in question. Nations like Israel and South Korea have employed armed robots to patrol their borders. The United States' Defense Advanced Research Projects Agency (DARPA) has even underscored the significance of robotics for warfare by inviting research and development firms to take part in the DARPA Robotics Challenge (DRC).

Headed by Program Manager Dr. Gill Pratt, the DRC has as its primary technical goal the development of ground robots capable of completing complex tasks in areas or environments that might be too dangerous for human beings. The robots developed for these disaster-response operations should essentially be capable of replacing people: working with hand tools and perhaps even driving ground vehicles.

DARPA is encouraging widespread participation in the DRC, soliciting universities; small, medium, and large businesses; and even individuals willing to contribute and compete in the challenge. The defense agency hopes to attract thinkers with novel ideas on robotics and on how robots can be made "more human."

Some of the advances sought by DARPA relate to the robots and some to the operators. Improvements in robotic technology will be judged by such parameters as mobility, dexterity, strength, and platform endurance. But the success of this competition will also depend on effective control of the robots by inexperienced operators under less-than-ideal conditions, such as intermittent communications links.

As a secondary goal, DARPA is hoping to spread interest in robotics. By making hardware and software more accessible to interested contributors, the agency is hoping to boost its yield of new robotic ideas. By providing government-furnished equipment (GFE) such as robotic development platforms to interested participants, the agency may give those companies, universities, and individuals the motivation they need to make a real contribution. In addition, robotic development teams will have access to a simulator created by DARPA and populated with models of robots, robot components, and different field environments.

Sophisticated robots provide an alternative to the use of humans in battlefield and other hostile environments. DARPA has been a long-time proponent of the potential for intelligent robots: machines that are not only capable of following the commands of a human controller, but also of making decisions when necessary. Of course, companies involved in the development of intelligent robots have often voiced a concern that may resemble something from a science-fiction novel: What happens if one of these intelligent robots makes a decision that should have been made by a human?

While this question may appear to reside more in the realms of philosophy and ethics than in that of engineering, the engineering reality of it is not far away. Given the computing power of available microprocessor chips, operating programs capable of performing rudimentary decision making can readily be installed within the control centers of new intelligent robot designs.
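The kind of rudimentary decision making described above can be as simple as a rule table mapping platform state to a high-level action. The following is a minimal, hypothetical sketch; the thresholds, state inputs, and action names are all invented for illustration and do not correspond to any actual DARPA or fielded system.

```python
def decide(battery_pct: float, link_ok: bool, hazard_detected: bool) -> str:
    """Map basic platform state to a high-level action.

    A toy rule-based policy: rules are checked in priority order,
    and the first matching rule determines the action.
    """
    if battery_pct < 20:
        return "return_to_base"   # preserve the platform above all else
    if hazard_detected:
        return "hold_and_alert"   # defer the hard call to a human operator
    if not link_ok:
        return "loiter"           # wait for communications to resume
    return "continue_mission"     # nominal case


if __name__ == "__main__":
    print(decide(battery_pct=85, link_ok=True, hazard_detected=False))
```

Note that even this toy policy encodes the ethical concern raised above: the hazard rule deliberately hands the decision back to a human rather than acting autonomously.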

Because of the amount of data that can be captured by remote surveillance, whether by unmanned ground vehicles (UGVs) or UAVs, shifting decision making from a human operator to a machine (closer to the source of the surveillance data) is not inconceivable in the near future. DARPA's interest in improving drones and video surveillance is nominally to avoid the misidentification errors that lead to fatalities on the wrong side. If decision making on the part of those drones, such as by comparing detected (unknown) images to known images, can be applied to help avoid costly mistakes, then these investments in robotics technology will be well worth it.
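Comparing an unknown image against a library of known images can be done, in its simplest form, with a similarity metric such as normalized cross-correlation. The sketch below is purely illustrative, assuming small grayscale images of equal size represented as NumPy arrays; the `classify` helper and its labels are invented for this example, and a real targeting aid would use far more robust, trained recognition methods.

```python
import numpy as np


def similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Normalized cross-correlation between two equal-size grayscale images.

    Returns a value near 1.0 for well-matched images and near 0.0
    (or negative) for unrelated ones.
    """
    a = (a - a.mean()) / (a.std() + 1e-9)  # zero-mean, unit-variance
    b = (b - b.mean()) / (b.std() + 1e-9)
    return float((a * b).mean())


def classify(unknown: np.ndarray, library: dict) -> tuple:
    """Return the label and score of the most similar known image."""
    best_label, best_score = None, float("-inf")
    for label, known in library.items():
        score = similarity(unknown, known)
        if score > best_score:
            best_label, best_score = label, score
    return best_label, best_score
```

A usage example: with a library of reference patterns such as `{"vehicle": ..., "building": ...}`, `classify(detected_image, library)` returns the best-matching label, which could then gate a decision the way a human analyst's identification would.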