Intelligent Robotic Behaviors for Landmine Detection and Marking

by David J. Bruemmer, Douglas A. Few, Curtis W. Nielsen, Miles C. Walton [ Idaho National Laboratory ]

This article discusses experimental results achieved with a robotic countermine system that utilizes autonomous behaviors and a mixed-initiative control scheme to address the challenges of detecting and marking buried landmines. By correlating aerial imagery and ground-based robot mapping, the interface provides context for the operator to task the robot. Once tasked, the robot can perform the search and detection task without the use of accurate global positioning system information or continuous communication with the operator. Results show that the system was able to find and mark landmines with a very low level of human involvement. In addition, the data indicates that the robotic system may be able to decrease the time to find mines and increase detection accuracy and reliability.

Landmines are a constant danger to soldiers during conflict and to civilians years after conflicts cease, causing thousands of deaths and injuries every year.1 It has long been thought that landmine detection is an appropriate application for robotics because it is a dull, dirty and dangerous task.2,3 The reality, however, has been that the critical nature of the task demands a reliability and performance that most robots have not been able to provide.4 On the one hand, many autonomous strategies rely on assumptions of accurate global positioning systems. When GPS is inaccurate or when it is jammed, the performance of this approach degrades quickly. On the other hand, teleoperated strategies are limited by the fact that the combined demands of navigation, sweep coverage and signal interpretation severely overload the human operator and can lead to poor performance.

In response to the limitations of both autonomous and teleoperated strategies, we present a mixed-initiative approach that allows the operator and robotic assets to work together to accomplish a countermine mission. Researchers at the Idaho National Laboratory, Carnegie Mellon University, and the Space and Naval Warfare Systems Center San Diego have developed a system that combines aerial imagery from an unmanned aerial vehicle with behavior-based autonomous search-and-detection capabilities on a ground robot to identify and mark buried landmines both in the physical world and in a digital representation of the world. The system's effectiveness was rigorously evaluated by the United States Army Test and Evaluation Command and by the U.S. Army Maneuver Support Center Futures Center (MANSCEN) at Fort Leonard Wood in Missouri.

Mission Requirements

The purpose of this research was to evaluate the effectiveness and suitability of an Autonomous Robotic Countermine System to proof a one-meter (3.2-foot) dismounted lane by searching for, marking and reporting detected landmines, and by marking the boundaries of the proofed lane. The intent was to provide the current military force with an effective alternative to dismounted-lane countermine operations. MANSCEN determined that although accurate digital marking of landmine locations within a terrain map was desired, accurate physical marking of the mine locations was essential to the mission requirements.

Developing a successful solution required a complete understanding of the end-user's goals and requirements. This was accomplished with over two years of dialogue with MANSCEN and the U.S. Army Engineer School to develop and refine the mission requirements. Furthermore, numerous conversations with the Night Vision and Electronic Sensor Directorate at Fort Belvoir were required to discuss the capabilities and limitations of current sensor technologies.

Previous studies by MANSCEN had shown that real-world missions would involve limited bandwidth communication, inaccurate terrain data and sporadic availability of GPS. Consequently, task constraints handed down from MANSCEN demanded minimal dependence on: network connectivity (e.g., wireless Ethernet and radio communications); centralized control (e.g., off-board motion planning); GPS; and accurate a priori terrain data.

MANSCEN also emphasized the need for reduced operator workload and training requirements. The military operational requirements document specified that within the future combat system unit of action, there would no longer be dedicated engineers focused on the countermine mission; instead, anyone within the unit of action should be able to task the system. A final requirement was that the robotic system be able to handle cluttered outdoor environments. Although the robot platforms and sensor suite were important considerations, the goal of this effort was not focused on a particular robot platform or a particular countermine sensor; rather, the stated goal was to "provide portable, re-configurable tactical behaviors to enable teams of small UGVs [unmanned ground vehicles] and UAVs [unmanned aerial vehicles] to collaboratively conduct semi-autonomous countermine operations."

Technical Approach

Software behavior development. The control of the vehicle, mine detector and marking system were all integrated into the Idaho National Laboratory Robot Intelligence Kernel.5 RIK integrates algorithms and hardware for perception, world-modeling, adaptive communication, dynamic tasking and behaviors for navigation, exploration, search, detection and plume mapping for a variety of hazards (e.g., explosive, radiological). Robots with RIK can avoid obstacles, plan paths through cluttered indoor and outdoor environments, search large areas, monitor their own health, find and follow humans, and recognize changes within the environment. To accomplish the overall countermine search behavior, the RIK arbitrates between obstacle avoidance, waypoint navigation, path planning and mine-detection coverage behaviors, and human input (e.g., joystick control or dropping a target within the interface), all of which run simultaneously and compete for control of the robot. A key component of the deliberative capabilities of the robot is an occupancy-grid based map-building algorithm developed by the Stanford Research Institute.6 This algorithm uses probabilistic reasoning to pinpoint the robot's location with respect to the map; when features exist in the environment, the algorithm provides relative positioning accuracy of +/- 10 centimeters (4 inches) even when GPS is unavailable.
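The arbitration scheme described above can be sketched as a simple priority list in which higher-priority behaviors claim control of the vehicle whenever they are active. The following Python sketch is an illustrative reconstruction only; every function and field name is hypothetical and does not reflect the actual RIK implementation.

```python
def human_input(state):
    # A joystick command from the operator, when present, overrides autonomy.
    return state.get("joystick")

def obstacle_avoidance(state):
    # Claims control only when an obstacle is closer than a safety threshold.
    if state["nearest_obstacle_m"] < 0.5:
        return {"v": 0.0, "turn": 0.8}
    return None

def waypoint_navigation(state):
    # Default behavior: drive toward the current waypoint.
    return {"v": 0.4, "turn": 0.0}

# Behaviors run every control cycle; the arbiter grants control to the
# highest-priority behavior that produces a command.
BEHAVIORS = [human_input, obstacle_avoidance, waypoint_navigation]

def arbitrate(state):
    for behavior in BEHAVIORS:
        command = behavior(state)
        if command is not None:
            return command
    return {"v": 0.0, "turn": 0.0}  # no behavior active: stop
```

The key design point, reflected in the article, is that human input is simply one more behavior competing for control rather than a separate operating mode.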

Robotic design. The air vehicle of choice was the Arcturus T-15 (see Figure 1), a fixed-wing aircraft that can maintain long flights and carry the necessary video and communication modules. For the countermine mission, the Arcturus was equipped to fly two-hour reconnaissance missions at elevations between 200 and 500 feet (61–152 meters). A spiral development process was undertaken to provide the air vehicle with autonomous launch and recovery capabilities as well as path planning, waypoint navigation and the ability to create an autonomous visual mosaic. The resulting mosaic can be geo-referenced if compared to a priori imagery, but at the time of this experiment, did not provide the positioning accuracy necessary to meet the 10 centimeter (4 inch) accuracy requirements for the mission. On the other hand, the internal consistency of the mosaic is very high since the image processing software can reliably stitch the images together.

Figure 1: The Arcturus T-15 airframe. All Graphics Courtesy of David J. Bruemmer

Carnegie Mellon University developed two ground robots (see Figure 2) for this effort. The robots were modified humanitarian demining systems equipped with inertial systems, compass, laser range finders and a low-bandwidth, long-range communication payload. A MineLab F1A4 detector was mounted on both vehicles together with an actuation mechanism that can raise and lower the sensor, as well as scan it from side to side at various speeds. CMU developed the signal processing to analyze the output from this sensor and provide the robot behaviors with a centroid location relative to the robot's position. A force torque sensor was used to calibrate sensor height based on the pressure exerted on the sensor when it touches the ground. The mine-sensor actuation system was designed to scan at different speeds and at varying angular amplitudes throughout the operation. In addition, the Space and Naval Warfare Systems Center in San Diego developed a compact marking system that dispenses two different colors of agricultural dye. Green dye was used to mark the lane boundaries while red dye was used to mark the mine locations. The marking system consists of two dye tanks, a larger one for marking the cleared lane and a smaller one for marking the mine location.
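The force-torque height calibration mentioned above can be sketched as a simple closed loop: lower the detector head until a contact force is sensed, then retract to a fixed scanning standoff. The thresholds, step sizes and names below are invented for illustration and are not CMU's actual values.

```python
def calibrate_height(start_cm, force_at, step_cm=0.5, contact_n=2.0, standoff_cm=3.0):
    """Lower the head from start_cm until force_at(h) signals ground contact,
    then retract to the scanning standoff height. force_at(h) returns the
    force (newtons) the force-torque sensor reads at head height h."""
    h = start_cm
    while force_at(h) < contact_n and h > 0.0:
        h -= step_cm          # lower the sensor head toward the ground
    return h + standoff_cm    # retract to the scanning standoff

# Toy force model for testing: no force until the head touches ground at 2 cm.
toy_force = lambda h: 0.0 if h > 2.0 else 5.0
```

With the toy model, a head lowered from 10 cm detects contact at 2 cm and settles at a 5 cm scanning height.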

Figure 2: Countermine robot platform.

Operator control interface design. Many scientists have pointed out the potential for benefits to be gained if robots and humans work together as partners.7,8,9,10 For the countermine mission, this benefit cannot be achieved without some way to merge perspectives from the human operator, air vehicle and ground robot. In order for the operator to task the robot without relying on global positioning, imagery taken from the unmanned aerial vehicle must be integrated with the occupancy grid map built by the robot. One lesson learned in terms of air-ground teaming was that it may not be possible to automatically generate a perfect fusion of the air and ground representations, especially when errors such as positioning inaccuracy and camera skew play a role. Even with geo-referenced imagery, real-world trials showed that the GPS-based correlation technique does not reliably provide the accuracy needed to support the countermine mission.

To alleviate dependence on GPS, collaborative tasking tools were developed that use common reference points in the environment to correlate disparate internal representations (e.g., aerial imagery and ground-based occupancy grids). In most cases, it was obvious to the user how the aerial imagery could be nudged or rotated to provide a more appropriate fusion between the ground robot's digital map and the air vehicle's image. The correlation tools therefore allow the user to select common reference points within both representations, such as the corners of buildings, fence posts, or vegetation marking the boundary of roads and intersections. In terms of the need to balance human and robot input, this approach required very little effort from the human (a total of four mouse clicks) and yet provided a much more reliable and accurate correlation than an autonomous solution. It is also important to note that the same interface tool that correlates aerial imagery with the ground map can also use satellite imagery or other forms of terrain data.
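The two-point correlation idea can be made concrete: the operator's four mouse clicks give two corresponding reference points, and two point pairs fully determine a 2D similarity transform (scale, rotation and translation) between the aerial image and the robot's map. The sketch below is an illustrative reconstruction, not the actual interface code.

```python
def fit_similarity(map_pts, image_pts):
    """Fit a 2D similarity transform from two point pairs; returns a function
    f(p) mapping image coordinates into map coordinates. Representing points
    as complex numbers makes scale + rotation a single complex multiply."""
    m0, m1 = complex(*map_pts[0]), complex(*map_pts[1])
    i0, i1 = complex(*image_pts[0]), complex(*image_pts[1])
    a = (m1 - m0) / (i1 - i0)   # encodes scale and rotation
    b = m0 - a * i0             # encodes translation

    def transform(p):
        q = a * complex(*p) + b
        return (q.real, q.imag)

    return transform
```

In practice, spreading the two clicked reference points far apart keeps the fitted rotation and scale numerically stable.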

Once the imagery and the map are integrated, the imagery serves as a backdrop for the operator to task the robot to previously unknown areas. The representation used for this experiment implements a three-dimensional, computer-game-style representation of the real world constructed on-the-fly.11 The 3D representation maintains the relative sizes of objects in the environment and illustrates the map built by the robot as it relates to the aerial image and the position of the robot, obstacles, mines, and the path cleared by the robot. Figures 3 and 4 illustrate different perspectives of the interface used for the experiment. This fusion of data from air and ground vehicles is more than just a situation-awareness tool. It supports a collaborative positioning framework that exists between air vehicle, ground vehicle and human operator. Each team member contributes to the shared representation and has the ability to make sense of it in terms of its own, unique internal state.

Figure 3: Operator control unit with "close view" of mines and lane.

Figure 4: OCU showing the robot map correlated with an aerial mosaic.

Experiment. To test the proposed system against the mission requirements, personnel from the U.S. Army Maneuver and Support Center and the U.S. Army Test and Evaluation Command, both based at Fort Leonard Wood, Missouri, conducted an experiment 20–28 October 2005 at the Idaho National Laboratory's UAV airstrip. The Test and Evaluation Command authored the experiment plan, performed the field experiments and certified all data collected. The experiment consisted of repeated trials of a dismounted route-sweeping task, and the data collected included measurements of human, robot and overall team performance. Sweeping a dismounted lane required the robot to navigate a path to a target location while physically and digitally marking detected mines and the boundaries of the searched lane. A test lane was prepared on a 50-meter (55-yard) section of an unimproved dirt road near the INL UAV airstrip. Six inert A-15 anti-tank landmines were buried in the road between six and eight inches (15–20 centimeters) deep.12 Sixteen runs were conducted with no obstacles on the lane, and 10 runs had various obstacles scattered on the lane, such as boxes and crates as well as sagebrush and tumbleweeds.

Procedure. The robot was prepared for operation at the beginning of each trial. Each trial consisted of the operator tasking the robot to the starting point of the lane, initiating a brief mine sensor-calibration behavior, and then initiating the 1.5-meter (5-foot) wide lane-sweeping behavior. Altogether, this required a total of three button clicks on the operator control unit. Since the repeated use of colored dye would produce confusion regarding the marks on the ground, water was used instead of dye throughout the experiment.

Figure 5: Example of mine marking.

As the robot proceeded, test personnel following the robot placed red poker chips at the location of each wet spray mark. These poker chips allowed personnel to accurately measure the distance from the center of the dye spray to the center of the mine, as shown in Figure 5. The water then dried before the next trial. Throughout the experiment, all mine locations reported to the operator control unit were checked, and a copy of the data log and a screenshot of the markings from the OCU were saved. A photograph of each mine and its location was taken, and a video of each run was recorded. Data sheets recorded meteorological data, mine marking errors, missed mines, false detections and other comments from those conducting the experiment. After the robot had completed its mission, the operator drove it back to the starting point. At the conclusion of the trial, the distance from each mine mark to the center of the mine was measured and recorded.

Results. There were four criteria to the tested requirements in this experiment: finding mines, marking mines, reporting mines and marking the proofed lanes. During the 26 runs executed during the experiment, the robot correctly detected 124 out of 131 mines (95 percent). Of the seven mines not detected, two were due to a miscalibration of the height of the sensor at the beginning of the run, two were due to low battery levels on the mine sensor, and three were missed during sharp turns to avoid obstacles. All missed mines were at or near the edge of the proofed lane. The Autonomous Robotic Countermine System had a single false positive detection during all the runs: one mine was detected and reported twice, once on the leading edge of the mine and once on the trailing edge. This gives a false detection rate of less than 1 percent.

All of the mines detected by the ARCM System were physically marked on the ground. The distance between the center of the physical mark and the center of the mine was measured for 91 mines; the average marking error was 12.67 centimeters (4.98 inches) with a standard deviation of 8.56 centimeters.13 For each of the trials, the lane boundaries were marked in the physical and digital environments as shown in Figures 6 and 7. Of the 124 mines detected, only one was not digitally reported to the operator control unit; the remainder were automatically reported and logged. A text file with the GPS coordinates of each mine was logged in a separate run file, and screen shots of each run were made showing the location of each mine within the robot's terrain map.
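The marking-error statistics above can be put in context with a short calculation. The sample below is hypothetical (the 91 individual measurements are not published here); it only illustrates how such a summary is computed, and confirms that the reported 12.67-centimeter average error is smaller than the mine's 16.7-centimeter radius, so on average the physical mark falls within the mine's footprint.

```python
import statistics

# Hypothetical marking-error sample in centimeters, for illustration only.
errors_cm = [4.0, 9.0, 12.0, 15.0, 23.0]

mean_err = statistics.mean(errors_cm)   # average marking error
std_err = statistics.stdev(errors_cm)   # sample standard deviation

# Reported average error vs. mine radius (33.4 cm diameter, endnote 13).
mine_radius_cm = 33.4 / 2
on_average_on_mine = 12.67 < mine_radius_cm
```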

Figure 6: Proofed Lane marking.

Figure 7: Mine and lane display on the interface (OCU).

The ARCM System was successful in all runs in autonomously negotiating the 50-meter (55-yard) course and marking a proofed one-meter lane. This was true even when the lane was covered by a high density of clutter, including brush, boxes and large stones. The 26 runs had an average completion time of 5.75 minutes with a 99 percent confidence interval of +/- 0.31 minutes. The maximum time taken for any run was 6.367 minutes. Another interesting finding is that the average level of human input throughout the countermine exercises was less than 2 percent when calculated based on time. The U.S. Army Test and Evaluation Command indicated that the ARCM System achieved "very high levels of collaborative tactical behaviors." When MANSCEN applied the "autonomy levels for unmanned systems" rubric, which includes indices for operator interaction, environmental difficulty and task complexity, the system was rated at a level of eight to nine out of a possible 10.
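The reported timing statistics can be sanity-checked: a 99 percent confidence half-width of 0.31 minutes over 26 runs implies a run-to-run standard deviation of roughly 0.57 minutes, from the usual relation half_width = t * s / sqrt(n). The sketch below assumes the article used a two-sided t interval; the t quantile for 25 degrees of freedom (about 2.787) is taken from standard tables.

```python
import math

# Back out the run-to-run standard deviation implied by the reported
# 99% confidence interval: half_width = t * s / sqrt(n) => s = hw * sqrt(n) / t.
n = 26             # number of runs
half_width = 0.31  # minutes (reported)
t_crit = 2.787     # two-sided 99% t quantile, 25 degrees of freedom (tables)

implied_std = half_width * math.sqrt(n) / t_crit  # roughly 0.57 minutes
```

A standard deviation near half a minute on a 5.75-minute mean is consistent with the claim that run times were tightly clustered.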


The research reported here indicates that operational success was possible only with a mixed-initiative approach that defined different, complementary responsibilities and roles for the human, air vehicle and ground vehicle. The results of the real-world experiment showed that the autonomous robot countermine system accurately marked, both physically and digitally, 124 out of 131 buried mines in an average time of less than six minutes. Comparing the robot to a human performance baseline is difficult and no attempt was made to perform a rigorous comparison. In order to provide a rough juxtaposition, consider that, according to MANSCEN, it would take approximately 25 minutes for a trained deminer to complete the same task accomplished by the robot. In terms of probability of detection, a trained deminer detecting mines on a 50-meter (55-yard) training lane can expect to discover between 60 percent and 90 percent of the mines, depending on experience and the type of landmine to be detected.

Future Work

While the robot's performance is encouraging, it is important to understand that many challenges remain. One important caveat to the work reported here is that the mines used had a high metallic content. The need to find low-metallic mines will require a more advanced sensor. Ongoing collaboration with the Night Vision and Electronic Sensor Directorate at Fort Belvoir has produced a combined ground-penetrating radar and electromagnetic-induction sensor, which is being used in the next phase of this effort to improve detection of low-metallic mines.

Another important caveat is that the robot platform used for the effort reported here does not meet the need for ruggedness or for all-terrain mobility. The data presented here are the results of a test on an "unimproved dirt path." To accomplish the same task in cross-country terrain is also a subject of future work. Finally, the U.S. Army Engineer School indicated that the next phase of research should support a vertical float feature to maintain an exact height of the sensor head above the ground and that they would like to see more collaborative UAV functions including the ability to coordinate multiple UAVs and UGVs from a single controller.

The Second Phase

Each of these areas for future work is being addressed by the second phase of this research, currently underway with funding from the United States Office of the Undersecretary of Defense Joint Ground Robotics Enterprise. The Robotic Systems Joint Program Office at Redstone Arsenal in Huntsville, Alabama, is providing programmatic direction for the effort, called Autonomous Robotic Countermine Capability. As with the research reported in this article, a primary goal of ARC2 is to ensure seamless portability of the software behaviors in terms of moving the code between robots. Since the experiment reported here was conducted, the software behavior architecture has been ported to several different vehicles including the iRobot Packbot, Foster-Miller Talon and the Segway RMP400.

In addition to platform portability, ARC2 is also focused on ensuring that the behaviors support reconfiguration for different countermine sensing payloads. Work is underway to utilize several different countermine sensors. Representatives from the U.S. Army Engineer School have requested that the Cyterra AN/PSS-14 mine sensor be utilized for the next phase of evaluation. In terms of platforms, the combat engineers have asked that the behaviors be tested on fielded systems. The ARC2 project will utilize the Foster-Miller Talon and the iRobot Packbot to develop and assess the new capabilities. Figure 8 shows the Cyterra AN/PSS-14 mine sensor together with the iRobot Packbot. The team is also working to utilize and empirically assess the Niitek sensor being developed under the Night Vision and Electronic Sensor Directorate's Advanced Mine Detection Sensor Program. According to experimentation performed by the NVESD, this sensor has shown the greatest potential to increase the probability of detection for low-metallic mines. A new effort under the direction of the Program Manager for Countermine and Explosive Ordnance Disposal at Fort Belvoir, Virginia, will test how effectively the ARCM behaviors can use this sensor on the iRobot Warrior, the Remotec HD-1 and the FCS Small Unmanned Ground Vehicle (SUGV), as well as the Packbot and Talon.

Figure 8: The iRobot Packbot outfitted with the Cyterra AN/PSS-14 mine sensor.

For the next phase of research, the U.S. Army Engineer School has suggested that the resulting robotic technology be subjected to a rigorous comparison to the human baseline. To support this, MANSCEN is developing an experiment plan that will compare humans and robots side by side on the same lanes with consideration to probability of detection, false positives, user workload, overall operational manpower requirements and speed. Autonomy and user tasking remain a major focus, and several of the experiment plan questions are focused on understanding how human input can affect overall performance throughout the mission. In addition to the planned final experiment at Fort Leonard Wood, the ARC2 effort will include experiments to evaluate rigorously the performance benefits associated with the use of a priori terrain data and the use of live aerial imagery from a UAV.

The research presented here can be adapted from the military arena for relevancy to the challenges of humanitarian demining. It is important to note that humanitarian demining is significantly different from military demining. Antonic, Ban and Zagar point out that: "The military needs to breach a narrow path through the minefield as fast as possible and with acceptable losses due to missed mines. On the other hand, humanitarian demining requires 100 percent detection and removal of all mines on a large area."2

To address the challenges of humanitarian demining, a multi-robot approach is being developed which will use multiple, inexpensive platforms that can provide peer validation to increase the probability of detection. The multi-robot strategy will also allow the behaviors to be used for larger areas.

Another consideration for humanitarian demining is the price of the robotic platforms. To reduce the cost of the system, the behaviors presented in this article have now been ported to a commercial four-wheeled robot manufactured by Segway that costs less than a third of the fielded military systems under consideration. As different robots and sensors become available, the portability and reconfigurability of the behaviors will allow them to be used across a variety of tasks and environments.

This work could not have been successful without the help of many individuals and organizations. In particular, the authors would like to thank Mark McKay, Matthew Anderson, Jodie Boyce, Warren Jones and Scott Bauer from the INL UAV team; Dr. Herman Herman and Jeff McMahill at the National Robotics Engineering Consortium at Carnegie Mellon University; Aaron Burmeister, Bart Everett, and Estrellina Pacis at the Space and Naval Warfare Systems Center in San Diego; Dr. Kurt Konolige and Dr. Regis Vincent at Stanford Research Institute; David Knichel, David Comella, Walton Dickson, Major Roger Owen, and Major Scott Werkmeister at the MANSCEN Futures Center; Clay Thompson at the U.S. Army Test and Evaluation Command; Cliff Hudson and Ellen Purdy at the Joint Ground Robotics Enterprise; Colonel Terry Griffin at the Robotic Systems Joint Program Office; Eloisa Lara at the Night Vision and Electronic Sensors Directorate; Lieutenant Colonel Raymond "Butch" Boyd and John Hegle at the U.S. Army Engineer School.


David J. Bruemmer currently leads unmanned ground vehicle research at the Idaho National Lab focusing on providing intelligent autonomy for applications including remote characterization of high-radiation environments, mine sweeping, military reconnaissance and search-and-rescue operations. Bruemmer was selected as an Inventive Young Engineer by the National Academy of Engineers and is a recipient of a 2006 R&D 100 Award.

Douglas Few obtained a B.S. in computer science from Keene State College in 2003. Currently Few is a Principal Research Scientist at the Idaho National Laboratory. Few's interests include human-robot interactions, and mixed-autonomy robotic systems.

Curtis W. Nielsen completed a Ph.D. in computer science with an emphasis in human-robot interaction at Brigham Young University in 2006. He subsequently joined the Idaho National Laboratory as a Principal Research Scientist in the Robotics and Human Systems group. Nielsen is one of the recipients of a 2006 R&D 100 Award for developing a Robot Intelligence Kernel.

Miles Walton has been a Control Systems Software Engineer at the Idaho National Laboratory for 27 years. Now in the Robotics and Human Systems department, he has been on teams recognized at state, regional and national levels for the development of robot intelligence systems and human/machine interfaces. Walton received a Bachelor of Science in computer science from Brigham Young University in 1980.


  1. Accessed October 31, 2007.
  2. Davor Antonic, Zeljko Ban, and Mario Zagar. "Demining robots - requirements and constraints." Automatika, 42(3-4), 2001.
  3. J.D. Nicoud and M.K. Habib. "The Pemex-B autonomous demining robot: perception and navigation strategies." Proceedings of Intelligent Robots and Systems, pages 419–424, Pittsburgh, PA, 1995.
  4. J.-D. Nicoud. "Vehicles and robots for humanitarian demining." Industrial Robot: An International Journal, 24(2):164–168, April 1997.
  5. Douglas A. Few, David J. Bruemmer, and Miles C. Walton. "Improved human-robot teaming through facilitated initiative." Proceedings of the 15th IEEE International Symposium on Robot and Human Interactive Communication, Hatfield, United Kingdom, September 2006.
  6. Kurt Konolige. "Large-scale map-making." Proceedings of the National Conference on AI (AAAI), San Jose, CA, 2004.
  7. Fong, T., Thorpe, C. & Baur, C. (2001) Collaboration, dialogue, and human-robot interaction. 10th International Symposium of Robotics Research, Lorne, Victoria, Australia, November.
  8. Kidd, P.T. (1992) Design of human-centered robotic systems. In Mansour Rahimi and Waldemar Karwowski, editors, Human Robot Interaction, pages 225–241. Taylor and Francis, London, England
  9. Scholtz, J. & Bahrami, S. (2003) Human-robot interaction: development of an evaluation methodology for the bystander role of interaction. Proceedings of the IEEE International Conference on Systems, Man, and Cybernetics, volume 4, pages 3212–3217
  10. Sheridan, T. B. (1992) Telerobotics, automation, and human supervisory control. MIT Press, Cambridge, MA.
  11. Curtis W. Nielsen and Michael A. Goodrich. Comparing the usefulness of video and map information in navigation tasks. Proceedings of the 2006 Human-Robot Interaction Conference, Salt Lake City, UT, 2006.
  12. Note that six landmines in a 50-meter section is considered a high mine concentration.
  13. The mines have a diameter of 33.4 centimeters.

Contact Information

David J. Bruemmer
Robotic and Human Systems Department
Idaho National Laboratory
Idaho Falls, ID 83415
Tel: +1 208 526 4078
Fax: +1 208 526 7688

Douglas A. Few
Idaho National Laboratory
Tel: +1 208 351 7560

Curtis W. Nielsen
Idaho National Laboratory
Tel: +1 208 526 8659

Miles C. Walton
Idaho National Laboratory
Tel: +1 208 526 3087