Dependable Low-altitude Obstacle Avoidance
for Robotic Helicopters Operating in Rural Areas
Torsten Merz and Farid Kendoul
Autonomous Systems Laboratory, CSIRO ICT Centre, 1 Technology Court, Pullenvale, Queensland 4069, Australia
e-mail: torsten.merz@csiro.au, farid.kendoul@csiro.au
Received 10 July 2012; accepted 15 February 2013
This paper presents a system enabling robotic helicopters to fly safely without user interaction at low altitude
over unknown terrain with static obstacles. The system includes a novel reactive behavior-based method
that guides rotorcraft reliably to specified locations in sparsely occupied environments. System dependability
is, among other things, achieved by utilizing proven system components in a component-based design and
incorporating safety margins and safety modes. Obstacle and terrain detection is based on a vertically mounted
off-the-shelf two-dimensional LIDAR system. We introduce two flight modes, pirouette descent and waggle
cruise, which extend the field of view of the sensor by yawing the aircraft. The two flight modes ensure that
all obstacles above a minimum size are detected in the direction of travel. The proposed system is designed
for robotic helicopters with velocity and yaw control inputs and a navigation system that provides position,
velocity, and attitude information. It is cost effective and can be easily implemented on a variety of helicopters
of different sizes. We provide sufficient detail to facilitate the implementation on single-rotor helicopters with a
rotor diameter of approximately 1.8 m. The system was extensively flight-tested in different real-world scenarios
in Queensland, Australia. The tests included flights beyond visual range without a backup pilot. Experimental
results show that it is feasible to perform dependable autonomous flight using simple but effective methods.
© 2013 Wiley Periodicals, Inc.
1. INTRODUCTION
In airborne remote sensing, flights are conducted at low alti-
tude or close to obstacles if sensors must be placed at a short
distance from objects of interest due to, among other things,
limited spatial sensor resolution, limited sensing range, oc-
clusion, or atmospheric disturbance. Applications of air-
borne remote sensing in rural areas include crop monitoring
and inspection of farm infrastructure. In addition to require-
ments in remote sensing, operating unmanned aircraft close
to terrain and obstacles decreases the risk of collisions with
manned aircraft, which usually operate at higher altitude
and clear of obstacles.
Low-altitude flights close to obstacles are performed
more easily with rotorcraft than with fixed-wing aircraft
due to their ability to fly at arbitrarily low speeds. Unmanned air-
craft are attractive because using manned aircraft is often
more expensive and hazardous for such operations. While
smaller unmanned rotorcraft such as electric multirotors
are sufficient for some applications, often larger aircraft
are required for traveling longer distances and carrying
heavier sensors. However, operations of larger unmanned
helicopters are currently constrained by the requirement for
skilled and possibly certified pilots and reliable communi-
cation links, especially for operations beyond visual range
in unknown environments.
Direct correspondence to: Torsten Merz, e-mail: torsten.merz@csiro.au.
This paper presents the LAOA (low-altitude obstacle
avoidance) system. Its goal is to guide a robotic helicopter
such that it arrives at a specified location without human
interaction and without causing damage to the environ-
ment or the aircraft. There is no requirement regarding the
trajectory. Safety is an important system requirement as es-
pecially larger helicopters may be hazardous. In addition to
being safe, the system should reliably guide the helicopter
to a specified location. Safety and reliability are both at-
tributes of dependability. System dependability has been
the primary requirement of our work. System performance
in terms of minimal task execution time or minimal travel
distance has been a secondary requirement. In addition to
being dependable, we aimed for a cost-effective, generic
system that can be implemented in a relatively short time.
The LAOA system enables safe autonomous operations
of robotic helicopters in environments with an unknown
terrain profile and unknown obstacles under the following
assumptions:
(1) there is no other aircraft operating in the area and ob-
stacles are static,
(2) there are no overhead obstacles,
Journal of Field Robotics, 1–33 © 2013 Wiley Periodicals, Inc.
View this article online at wileyonlinelibrary.com • DOI: 10.1002/rob.21455
Figure 1. One of CSIRO’s unmanned helicopters with an inte-
grated LAOA system (labeled components: inspection camera,
flight and navigation computers, 2D LIDAR system). The heli-
copter is configured for inspections of vertical structures.
(3) there are no obstacles smaller or thinner than the system
can detect at the minimum stopping distance, and
(4) the base helicopter system is serviceable and operated
within specified weather limitations.
We assume the base helicopter system includes a con-
trol and navigation system as specified in this paper. The
LAOA system makes use of the particular flight proper-
ties of helicopters. It is designed for a variety of helicopters
of different sizes. We have tested it on a small unmanned
single-rotor helicopter (Figure 1).
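Assumption (3) ties the minimum detectable obstacle size to the minimum stopping distance. As an illustrative sketch (the kinematic relation and all numbers below are our own, not parameters from the paper), the worst-case stopping distance combines the distance covered during a reaction delay with a constant-deceleration braking phase:

```python
def min_stopping_distance(v, a_max, t_delay):
    """Worst-case stopping distance: distance travelled during the
    reaction delay plus a braking phase at constant deceleration
    a_max. Illustrative kinematics only, not the paper's model."""
    return v * t_delay + v**2 / (2.0 * a_max)

# e.g. 5 m/s cruise, 2 m/s^2 deceleration, 0.5 s total delay
d = min_stopping_distance(5.0, 2.0, 0.5)  # 2.5 + 6.25 = 8.75 m
```

Any obstacle the sensor cannot resolve at this distance violates assumption (3).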
Reactive behavior-based methods have been success-
fully applied in many areas of robotics, but their potential
has not been explored much for robotic helicopters per-
forming real-world tasks. Given our task specification, we
decided it was worthwhile investigating. For the obstacle
avoidance part, a reactive navigation approach without a
global planning component was chosen because (1) suffi-
ciently accurate and current maps are often not available
for the mission areas, (2) mapping and planning during the
flight do not necessarily increase the efficiency of mission
execution in rural areas,¹ and (3) a reactive approach helps
to reduce computational resources required for real-time
implementation. Range information from LIDAR (light de-
tection and ranging, also LADAR) is used to create stimuli
for the reactive system and for height control during ter-
rain following. We decided to utilize LIDAR technology be-
cause from our experience with different sensing options, a
LIDAR-based approach was likely to give us the best results
for terrain and obstacle detection.
¹The number of obstacles encountered during remote sensing mis-
sions in rural areas is assumed to be relatively low.
Our main contributions are (1) a novel terrain and ob-
stacle detection method for helicopters using a single off-
the-shelf two-dimensional (2D) LIDAR system and yaw mo-
tion of the helicopter to extend its field of view; (2) a novel
computationally efficient, reactive behavior-based method
for goal-oriented obstacle and terrain avoidance optimized
for rotorcraft; (3) details of the implementation on a small
unmanned helicopter and results of extensive flight testing;
(4) evidence that it is feasible to conduct autonomous goal-
oriented low-altitude flights dependably with simple but
effective methods. The proposed methods are particularly
suitable for approaching vertical structures at low altitude.
All experiments were conducted in unstructured unknown
outdoor environments.
The paper is structured as follows: In the next sec-
tion we discuss related work. Section 3 provides a system
overview. The methods for detecting terrain and obstacles
are described in Section 4. Section 5 provides a description
of the flight modes of the LAOA system. The strategies for
goal-oriented obstacle avoidance are described in Section 6.
Experimental results of the field-tested system are pro-
vided in Section 7. Section 8 concludes with a summary
including system limitations and future work. Nomencla-
ture and technical details about the implemented system
can be found in the Appendix.
2. RELATED WORK
Developing autonomous rotorcraft systems with onboard
terrain-following and obstacle-avoidance capabilities is an
active research area. A variety of approaches to the terrain-
following and obstacle-avoidance problems exist, and many
papers have been published. The recent survey by Kendoul
(2012) provides a comprehensive review of rotary-wing un-
manned aircraft system (UAS) research, including terrain
following and obstacle detection and avoidance. In this
section, we briefly review related work and present ma-
jor achievements and remaining challenges in these areas of
research.
2.1. Sensing Technologies for Obstacle and
Terrain Detection
The sensing technologies commonly used onboard un-
manned aircraft are computer vision (passive sensing) and
LIDAR (active sensing). Cameras and other electro-optic
sensors are popular for environment sensing because
they are light, passive, compact, and provide rich infor-
mation about the aircraft’s self-motion and its surrounding
environment. Different types of imaging sensors have been
used to address the mapping and obstacle-detection prob-
lems. Stereo imaging systems have the advantage of pro-
viding depth images and ranges to obstacles and have been
used on rotary-wing UASs for obstacle detection and map-
ping (Andert and Adolf, 2009; Byrne et al., 2006; Hrabar,
2012; Theodore et al., 2006). Monocular vision (single cam-
era) has also been used as the main perception sensor in
different projects (Andert et al., 2010; Montgomery et al.,
2006; Sanfourche et al., 2009). Recently, optic flow sensors
have emerged as an alternative sensing technology for ob-
stacle detection and avoidance onboard small and mini un-
manned aircraft (Beyeler et al., 2009; Ruffier and Frances-
chini, 2005; William et al., 2008). Some researchers have also
investigated the use of wide field-of-view imaging systems
such as fisheye and omnidirectional cameras for obstacle de-
tection indoors (Conroy et al., 2009) and outdoors (Hrabar
and Sukhatme, 2009). The drawbacks of vision-based ap-
proaches are their sensitivity to ambient light and scene tex-
ture. Furthermore, the complexity of image-processing al-
gorithms makes a real-time implementation on low-power
embedded computers challenging.
LIDAR is a suitable technology for mapping and obsta-
cle detection since it directly measures the range by scanning
a laser beam in the environment and measuring distance
through time-of-flight or interference. LIDAR systems out-
perform vision systems in terms of accuracy and robustness
to ambient lighting and scene texture. Furthermore, map-
ping the environment and detecting obstacles from LIDAR
range data is less complex than doing so from intensity
images. Indeed, most successful results and major achieve-
ments in obstacle field navigation for unmanned rotorcraft
have been achieved using LIDAR systems (He et al., 2010;
Scherer et al., 2008; Shim et al., 2006; Tsenkov et al., 2008).
Despite the numerous benefits of LIDAR systems, they suf-
fer from some problems. They are generally heavier than
cameras and require more power (being active sensors), which
makes their integration in smaller aircraft with limited pay-
load challenging. LIDAR systems are also sensitive to some
environmental conditions such as rain and dust, and they
can be blinded by the sun. The main drawback of off-the-shelf
LIDAR systems is their limited field of view. Indeed, most
commercially available LIDAR systems only perform line
scans. For 3D navigation, these 2D LIDAR systems have
been mounted on nodding or rotating mechanisms when
used on rotorcraft (Scherer et al., 2012; Takahashi et al.,
2008). A few compact 3D LIDAR systems exist, but they
are either not commercially available, such as the one from
Fibertek Inc. (Scherer et al., 2008), or very expensive
and heavy, such as the Velodyne 3D LIDAR system.
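The time-of-flight principle mentioned above reduces to a one-line computation; the following sketch (illustrative only, not tied to any particular LIDAR system) converts a round-trip pulse time to range:

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def tof_range(round_trip_time_s):
    """Range from pulse time-of-flight: the pulse covers the
    distance twice (out and back), hence the factor 1/2."""
    return C * round_trip_time_s / 2.0

# a 200 ns round trip corresponds to roughly 30 m of range
r = tof_range(200e-9)  # ≈ 29.98 m
```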
Flash LIDAR cameras or 3D time-of-flight (TOF) cam-
eras are a promising emerging 3D sensing technology that
will certainly increase the perception capabilities of robots.
Unlike traditional LIDAR systems that scan a collimated
laser beam over the scene, Flash LIDAR cameras illumi-
nate the entire scene with diffuse laser light and compute
time-of-flight for every pixel in an imager, thereby result-
ing in a dense 3D depth image. Recently, several companies
started offering Flash LIDAR cameras commercially, such
as the SwissRanger SR4000 (510 g) from MESA Imaging AG,
Canesta 3D cameras, TigerEye 3D Flash LIDAR (1.6 kg) from
Advanced Scientific Concepts Inc., the Ball Aerospace 5th Gen-
eration Flash LIDAR, etc. However, they are either heavy
and expensive or small but with very limited range (10 m
for SwissRanger SR4000) and often not suitable in outdoor
environments.
Radar is the sensor of choice for long-range collision
and obstacle detection in larger aircraft. Radar provides
near-all-weather broad area imagery. However, for integra-
tion in smaller unmanned aircraft, most radar systems are
less suitable due to their size, weight, and power consump-
tion. Moreover, they are quite expensive. There are a few
smaller radar systems such as the Miniature Radar Altime-
ter (MRA) Type 1 from Roke Manor Research Ltd., which
weighs only 400 g and has a range of 700 m. We are not aware
of any work by an academic research group on the use of
radar onboard unmanned rotorcraft for obstacle and colli-
sion avoidance. In Viquerat et al. (2007), work was reported
using radar onboard a fixed-wing unmanned aircraft. The
use of other ranging sensors such as ultrasonic and infrared
sensors has been limited to a few indoor flights or ground
detection during autonomous landing.
Since most of these sensing technologies did not meet
our requirement of developing a dependable and cost-
effective obstacle-avoidance system for an unmanned heli-
copter in a relatively short time, we based our system upon
an already proven small-sized 2D LIDAR system. For the
reasons we discuss in Section 3, we decided to use the mo-
tion of the helicopter itself to extend the field of view of
the LIDAR system for 3D perception rather than using a
nodding or rotating mechanism.
2.2. Pirouette Descent and Waggle Cruise Flight
Modes
One of the main contributions of our work is the intro-
duction of two special flight modes (pirouette descent and
waggle cruise) for extending the field of view of the 2D LI-
DAR system as described in Section 5. We have not found
a description of similar flight modes in the literature except
for the work presented in Dauer et al. (2011). Indeed, re-
searchers from the German Aerospace Center (DLR) have
considered the problem of flying a linear path while con-
stantly changing the helicopter heading or yaw to aim the
mission sensor (e.g., camera) on its target. They have pro-
posed a quaternion-based approach for attitude command
generation and control for a helicopter UAS, and they eval-
uated its performance in a hardware-in-the-loop (HIL) sim-
ulation environment for an elliptic turn maneuver (similar
to the waggle cruise flight mode described in Section 5.7).
Although the approach presented in Dauer et al. (2011) re-
sults in better tracking accuracy, especially for relatively
high forward speeds and high yawing rates, the control
approach used in our work resulted in satisfactory results
for the intended application. This is because of the low for-
ward speed of the helicopter which is mainly imposed by
the limited sensing range of the LIDAR system we have
been using.
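To illustrate how yawing extends the field of view of a rigidly mounted 2D scanner, the following hedged sketch projects a single beam into a world-fixed frame; it ignores roll, pitch, translation, and the sensor mounting offset, all of which the real system must account for:

```python
import math

def scan_point_world(r, alpha, yaw):
    """Project one beam of a vertically mounted 2D scanner into a
    world-fixed frame, assuming level flight and no sensor offset
    (simplifications; the real system uses full attitude).
    r: measured range, alpha: beam angle within the scan plane,
    yaw: current helicopter heading."""
    # beam in the body frame: the scan plane spans the body
    # x (forward) and z (down) axes
    xb = r * math.cos(alpha)
    zb = r * math.sin(alpha)
    # a yaw rotation about the vertical axis sweeps the scan plane,
    # which is how pirouette/waggle motions build 3D coverage
    return (xb * math.cos(yaw), xb * math.sin(yaw), zb)
```

Sweeping `yaw` over an interval turns the single scan line into a swept 3D sector ahead of the aircraft.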
2.3. Terrain-following Systems
A closed-loop terrain following system allows aircraft to
automatically maintain a relatively constant altitude above
ground level. This technology is primarily used by mili-
tary aircraft during nap-of-the-earth (NOE) flight to take
advantage of terrain masking and avoid detection by en-
emy radar systems. However, terrain following is also a
useful capability for civilian UASs. For example, for low-
altitude remote sensing flights with fixed-focal-length cameras, it
is often required to capture images of objects on the ground
with constant resolution. Furthermore, terrain following
is a useful method for approaching short vertical inspec-
tion targets such as farm windpumps at a low but safe
height.
As for obstacle avoidance, terrain following can be
achieved with reactive or mapping-based approaches us-
ing passive (e.g., vision) or active sensors (e.g., LIDAR).
Bio-inspired optic flow methods have been investigated for
terrain following and were demonstrated only on small in-
door rotorcraft such as quadrotors (Herisse et al., 2010) and
a 100 g tethered rotorcraft (Ruffier and Franceschini, 2005).
An interesting result on outdoor terrain following using
optic flow has been reported in Garratt and Chahl (2008).
The developed system has been implemented onboard a
Yamaha RMAX helicopter, and allowed it to maintain 1.27 m
clearance from the ground at a speed of 5 m/s, using height
estimates from optic flow and GPS velocities.
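The height-from-optic-flow idea used by Garratt and Chahl rests on a standard relation for a downward-looking sensor in level translational flight; the sketch below states that relation only and is not their estimator (which additionally fuses GPS velocities):

```python
def height_from_flow(ground_speed, flow_rate):
    """For a downward-looking sensor translating at ground_speed
    (m/s) over flat terrain, the translational optic flow rate
    (rad/s) satisfies flow = v / h, hence h = v / flow.
    Rotation-induced flow is assumed already compensated; this is
    the textbook relation, not the authors' exact estimator."""
    return ground_speed / flow_rate

# 5 m/s forward speed with 4 rad/s of flow implies 1.25 m clearance
h = height_from_flow(5.0, 4.0)  # 1.25 m
```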
The LIDAR-based localization and mapping system,
developed by MIT researchers (Bachrach et al., 2009) for au-
tonomous indoor navigation of a small quadrotor in GPS-
denied environments, included a component related to ter-
rain following. Some of the beams of a horizontally mounted
Hokuyo LIDAR system were deflected down to estimate
and control height above the ground plane.
Reactive terrain following provides computationally
efficient ground detection and avoidance capabilities with-
out the need for mapping and planning. However, the ap-
proach has some limitations. With only limited knowledge
of the terrain profile ahead, sensing limitations of the ob-
stacle detection system, and a limited flight envelope, the
maximum safe speed of the aircraft is generally lower com-
pared to an approach that can make use of maps. The flight
path may also suffer from unnecessary aggressive height
changes when flying above nonsmooth terrain with large
variation and discontinuities in its profile.
An alternative to reactive terrain following is ground
detection and avoidance through mapping and path plan-
ning (Scherer et al., 2008; Tsenkov et al., 2008). These ap-
proaches are more general than reactive terrain-following
methods because they are able to perform terrain following
as well as low-altitude flights without the need to maintain
a constant height above the ground. However, they require
accurate position estimates relative to the reference frame of
an accurate map, and they are complex and generally com-
putationally more expensive than the reactive methods. We
show that the reactive method we propose performs well
for the operations we envisage.
2.4. Obstacle-avoidance Systems and Algorithms
A variety of approaches to the obstacle-avoidance prob-
lem onboard unmanned rotorcraft exist. They can be classi-
fied into two main categories: SMAP-based approaches and
SMAP-less techniques (Kendoul, 2012). In the SMAP (simul-
taneous mapping and planning) framework, mapping and
planning are jointly performed to build a map of the envi-
ronment, which is then used for path planning. SMAP-less
obstacle avoidance strategies are generally reactive without
the need for a map or a global path-planning algorithm.
2.4.1. SMAP-less Approaches
SMAP-less techniques aim at performing navigation and
obstacle avoidance with reactive methods without mapping
and global path planning. Reactive obstacle detection and
avoidance algorithms operate in a timely fashion, com-
puting one action at each instant based on the current context.
They use immediate measurements of the obstacle field to
generate a reactive response, preventing last-minute collisions
by stopping or swerving the vehicle when an obstacle is
known to be in the trajectory. However, it is often difficult
to prove completeness of reactive algorithms for reaching a
goal (if a path exists), especially for systems with uncertain-
ties in perception and control. Completeness proofs exist for
some algorithms such as the Bug2 algorithm (Choset et al.,
2005), which is similar to the second avoidance strategy we
describe in Section 6.
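For readers unfamiliar with Bug2, its key decision is when to stop following an obstacle boundary: the robot leaves the boundary once it re-encounters the start-goal line (the "m-line") at a point strictly closer to the goal than where it first hit the obstacle. A minimal 2D sketch of that leave condition (our own illustration, not the strategy of Section 6):

```python
import math

def on_m_line(p, start, goal, tol=0.1):
    """True if point p lies within tol of the line through start and
    goal -- the 'm-line' of Bug2. Assumes start != goal."""
    (px, py), (sx, sy), (gx, gy) = p, start, goal
    dx, dy = gx - sx, gy - sy
    # perpendicular distance from p to the infinite start-goal line
    return abs(dy * (px - sx) - dx * (py - sy)) / math.hypot(dx, dy) <= tol

def should_leave_boundary(p, hit_point, start, goal):
    """Bug2 leave condition: resume motion-to-goal when the robot
    re-encounters the m-line strictly closer to the goal than the
    point where it first hit the obstacle."""
    closer = math.dist(p, goal) < math.dist(hit_point, goal)
    return on_m_line(p, start, goal) and closer
```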
In the literature, most SMAP-less approaches are
vision-based, where obstacles are detected and avoided
using optic flow (Beyeler et al., 2009; Conroy et al., 2009;
Hrabar and Sukhatme, 2009; William et al., 2008; Zufferey
and Floreano, 2006) or a priori knowledge of some character-
istics of the obstacle such as color and shape (Andert et al.,
2010).
Bio-inspired methods that use optic flow have been
popular because of their simplicity and the low weight
of the required hardware. This is an active research area
that can advance the state of the art in rotary-wing UAS
3D navigation. Promising and successful results have al-
ready been obtained when using optic flow for obstacle
avoidance [Zufferey and Floreano (2006), indoor 30 g fixed-
wing UAS; William et al. (2008), indoor MAV; Hrabar and
Sukhatme (2009), outdoor Bergen helicopter; Beyeler et al.
(2009), outdoor fixed-wing UAS; and Conroy et al. (2009),
indoor quadrotor]. These methods are very powerful and
provide an interesting alternative for both perception and
guidance onboard mini UAS with limited payload. How-
ever, the problem of robust optic-flow computation in real
time and obstacle detection in natural environments is still
a challenge and an open research area.
Reactive obstacle avoidance based on a priori knowl-
edge of some characteristics of the obstacle has been ap-
plied in some projects. In Andert et al. (2010), for example,
DLR researchers have developed a vision-based obstacle-
avoidance system that allows a small helicopter to fly
through gates that are identified by colored flags. This sys-
tem was demonstrated in real time using a 12 kg robotic
helicopter that autonomously crossed gates of 6 m × 6 m
at a speed of 1.5 m/s without collisions. The system pre-
sented in Hrabar and Sukhatme (2009) includes a forward-
facing stereo camera to detect frontal obstacles. Based on
a 3D point cloud, obstacles are detected in the upper half
of the image using a distance threshold and a region-
growing algorithm. Once the obstacles have been detected,
an appropriate evasive control command (turn away, stop)
is generated.
LIDAR systems have also been used for reactive obsta-
cle avoidance onboard rotary-wing and fixed-wing UASs.
Scherer et al. (2008) have developed a reactive obstacle
avoidance system or local path planner that is based on
a model of obstacle avoidance by humans. Their reac-
tive system uses 3D LIDAR data expressed in the aircraft-
centric spherical coordinates, and it can be combined with
a global path planner. They have demonstrated results on
an RMAX helicopter operating at low altitudes and in dif-
ferent environments. In a recent work, Johnson et al. (2011)
developed and flight-tested two reactive methods for ob-
stacle and terrain avoidance to support nap-of-the-earth
helicopter flight. The first method is similar to ours in the
sense that it is based on a simple processing of each LI-
DAR scan, whereas the second one employs the potential
field technique. LIDAR-based reactive obstacle avoidance
has also been applied to small fixed-wing UASs such as the
Brigham Young University (BYU) platform (Griffiths et al.,
2006).
One of the motivations of our work was to investigate
the potential and effectiveness of using reactive obstacle-
avoidance systems for achieving real-world applications in
natural unknown environments without the need for map-
ping and global path-planning algorithms. SMAP-less tech-
niques are attractive because of their simplicity and real-
time capabilities. However, reactive methods are prone to
being incomplete (no path to the goal is found) and ineffi-
cient in natural environments. The methods we propose re-
liably guide the helicopter to a specified point and employ
heuristics to cope with inefficiency and the local minima
problem.
The basic ideas of our work have been presented at the
IEEE/RSJ International Conference on Intelligent Robots
and Systems in 2011 (Merz and Kendoul, 2011). In compar-
ison to the conference paper, this paper provides a more
detailed description of the system, its underlying methods,
and the experiments we conducted. The level of detail is suf-
ficient to facilitate the implementation on helicopters similar
to the one used for our experiments. Moreover, we provide
experimental results that have not been published in the
conference paper.
2.4.2. SMAP-based Approaches
SMAP-based approaches have been proven to be effective
and efficient for dealing with obstacles in many types of un-
known environments. However, there are environments in
which a reactive method would perform equally well, if not
better, since no maps need to be built. Moreover, SMAP-based
approaches are computationally expensive, especially those
based on computer vision.
Although many papers have been published on vision-
based obstacle avoidance for rotorcraft, very few systems
have been implemented on an actual aircraft, and modest
experimental results have been reported in the literature.
In Andert and Adolf (2009), stereo vision has been used to
build a world representation that combines occupancy grids
and polygonal features. Experimental results on mapping
are presented in the paper, but there are no results about
path planning and obstacle avoidance. A similar system
was described in Meier et al. (2012), where stereo vision
was used for mapping and obstacle avoidance onboard a
small quadrotor UAS. Another stereo vision-based system
for rotorcraft is described in Byrne et al. (2006). It com-
bines block-matching stereo (depth image) with image seg-
mentation based on graph representation appropriate for
obstacle detection. This system was demonstrated in real
time using Georgia Tech’s Yamaha RMAX helicopter. In San-
fourche et al. (2009) and Montgomery et al. (2006), monoc-
ular vision has been used to map the terrain and to
select a safe landing area for an unmanned helicopter. From
the reviewed literature, we found that there are no suc-
cessful implementations of vision-based methods onboard
rotary-wing UASs for obstacle avoidance using the SMAP
framework.
The most significant achievements in 3D navigation and
obstacle avoidance by unmanned rotorcraft have been ob-
tained using LIDAR systems and a SMAP-based approach.
The most successful implementations on rotary-wing UASs
are probably the ones by CMU (Scherer et al., 2008), the U.S.
Army/NASA (Tsenkov et al., 2008; Whalley et al., 2009),
Berkeley University (Shim et al., 2006), and MIT (Bachrach
et al., 2009). Some experimental results on using LIDAR sys-
tems onboard an unmanned helicopter for obstacle avoid-
ance were reported in Shim et al. (2006), where the BEAR
team developed a 2D obstacle-avoidance algorithm that
combines local obstacle maps for perception and a hier-
archical model predictive controller for path planning and
flight control. Equipped with this system, a Yamaha R-50 he-
licopter was able to detect 3 m × 3 m canopies (simulating
urban obstacles), to plan its path, and to fly around obstacles
to reach the goal waypoint at a nominal speed of 2 m/s.
In Scherer et al. (2008), researchers from CMU have
addressed the problem of flying relatively fast at low
altitudes in cluttered environments relying on online
LIDAR-based sensing and mapping. Their approach com-
bines a slower 3D global path planner that continuously
replans the path to the goal based on the perceived envi-
ronment with a faster 3D local collision avoidance algo-
rithm that ensures that the vehicle stays safe. A custom
3D LIDAR system from Fibertek Inc. was integrated into
a Yamaha RMAX helicopter and used as the main percep-
tion sensor. The system has been extensively flight-tested
in different sites with different obstacles and flight speeds.
More than 700 successful obstacle-avoidance runs were per-
formed in which the helicopter autonomously avoided build-
ings, trees, and thin wires.
Other impressive results for rotary-wing UAS SMAP-
based obstacle avoidance are reported in Whalley et al.
(2009). The U.S. Army/NASA rotorcraft division has de-
veloped an Obstacle Field Navigation (OFN) system for
low-altitude flights of rotary-wing UASs in urban environ-
ments. A SICK LIDAR system was mounted on a spinning
mechanism and used to generate 3D maps (obstacle prox-
imity map and grid height map) of the environment. Two
3D path-planning algorithms have been proposed: the first
uses a 2D A* grid search on map slices, and the second a
3D A* search on a height map (Tsenkov et al.,
2008). This OFN system has been implemented on a Yamaha
RMAX helicopter and demonstrated in a number of real
obstacle-avoidance scenarios and environments. More than
125 flight tests were conducted at different sites to avoid nat-
ural and man-made obstacles at speeds that ranged from 1
to 4 m/s.
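The 2D A* grid search underlying the OFN planner's map-slice approach can be sketched compactly; the following minimal 4-connected implementation with a Manhattan heuristic is our own illustration, not the OFN code:

```python
import heapq

def astar(grid, start, goal):
    """Minimal 4-connected A* on a 2D occupancy grid (0 = free,
    1 = occupied) with a Manhattan-distance heuristic. Returns the
    shortest path as a list of (row, col) cells, or None."""
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    open_set = [(h(start), 0, start, None)]  # (f, g, cell, parent)
    came_from, g_cost = {}, {start: 0}
    while open_set:
        _, g, cur, parent = heapq.heappop(open_set)
        if cur in came_from:  # already expanded with a better cost
            continue
        came_from[cur] = parent
        if cur == goal:  # reconstruct the path back to start
            path = []
            while cur is not None:
                path.append(cur)
                cur = came_from[cur]
            return path[::-1]
        r, c = cur
        for nb in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            nr, nc = nb
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                ng = g + 1
                if ng < g_cost.get(nb, float("inf")):
                    g_cost[nb] = ng
                    heapq.heappush(open_set, (ng + h(nb), ng, nb, cur))
    return None  # no path exists
```

Running the same search on each altitude slice of a 3D map yields the kind of layered planning described above.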
While previous systems have been developed mainly
for outdoor navigation and unmanned helicopters, the sys-
tem presented in Bachrach et al. (2009) and Bachrach et al.
(2011) is designed for mini rotorcraft such as quadrotors fly-
ing in indoor GPS-denied environments. The proposed sys-
tem is based on stereo vision and the Hokuyo LIDAR system
for localization and mapping. When this navigation system
was used with a planning and exploration algorithm, the
quadrotor was able to autonomously navigate (motion esti-
mation, mapping, and planning) in open lobbies, cluttered
environments, and office hallway environments. Another
LIDAR-based mapping system for small multirotor UASs
is presented in Scherer et al. (2012). It is based on an off-axis
spinning Hokuyo LIDAR system that is used to create a 3D
evidence grid of the riverine environment. Experimental re-
sults along a 2 km loop of river using a surrogate perception
payload on a manned boat are presented.
For a comprehensive literature review of SMAP-based
approaches and environment mapping for UAS naviga-
tion, we refer the motivated reader to the survey papers
of Kendoul (2012) and Sanfourche et al. (2012).
2.4.3. Other Approaches
In Zelenka et al. (1996), the authors present results of
semiautonomous nap-of-the-earth flights of full-sized he-
licopters. The research included vision, radar, and LIDAR-
based approaches for terrain and obstacle detection and
different avoidance methods using information from the
detection system and a terrain database.
In Hrabar (2012), the author combines stereo vision and
LIDAR for static obstacle avoidance for unmanned rotor-
craft. 3D occupancy maps are generated online using range
data from a stereo vision system, a 2D LIDAR system, and
both at the same time. A goal-oriented obstacle-avoidance
algorithm is used to check the occupancy map for poten-
tial collisions and to search for an escape point when an
obstacle is detected along the current flight trajectory. The
system has been implemented on one of CSIRO’s unmanned
helicopters. It was tested in a number of flights and scenar-
ios with a focus on evaluation and comparison for stereo
vision and LIDAR-based range sensing for obstacle avoid-
ance. However, the avoidance algorithm is prone to the local
minima problem and, as the results show, the system is not
suited for safe flights without a backup pilot.
3. SYSTEM OVERVIEW
This section provides an overview of a helicopter sys-
tem with an integrated LAOA system. We have used a
component-based design approach for both software and
hardware. Breaking a complex system down into individual
components has the advantage that each component can be
independently designed, tested, and certified. To
maximize system dependability, we have utilized existing
proven components in our design wherever possible.
The three main system components are a base heli-
copter system, a 2D LIDAR system, and the LAOA system.
The description of base helicopter systems and LIDAR sys-
tems is beyond the scope of this paper. Technical specifica-
tions of the base system components we have been using
can be found in Table V of the Appendix. The LAOA sys-
tem is designed to be generic. It can be implemented on any
robotic helicopter with velocity and yaw control inputs (see
Section 5.2) and a navigation system that provides position,
velocity, and attitude information. The system requires a
number of parameters that are specific to the aircraft, the
sensor and control system, and the environment. Most pa-
rameters are determined by geometric considerations. The
parameters we used in the experiments described in Section
7 are provided in the Appendix.
Obstacle and terrain detection is based on a 2D LI-
DAR system that is rigidly mounted on the helicopter as
described in Section 4. There are several reasons why we
chose a LIDAR-based approach: (1) LIDAR systems reli-
ably detect objects within a suitable range and with suf-
ficient resolution, (2) the systems produce very few false
Journal of Field Robotics DOI 10.1002/rob
Merz & Kendoul: Dependable Low-altitude Obstacle Avoidance for Robotic Helicopters Operating in Rural Areas • 7
[Figure 2 (block diagram): the 2D LIDAR system and the helicopter navigation system feed the perception functions (obstacle detection and terrain detection), which pass df, dd, hgnd, and hmin to the guidance & control functions (state machine, obstacle avoidance, terrain following, and flight control); these receive goalPosition and goalHeight and command the helicopter control system.]
Figure 2. Structure of the LAOA system (see the Appendix for nomenclature). The obstacle-avoidance functions include functions for waypoint generation. The terrain-following functions include functions for vertical height change. The flight-control functions include functions for trajectory generation.
positive detections in weather and environments in which
we typically operate (dust-free and no rain or snowfall),
and (3) obstacle and terrain detection based on range data
requires relatively low processing effort. Compared with 3D
LIDAR systems, 2D systems are widely commercially avail-
able at a reasonable price and require fewer computational
resources.
In our approach, 3D information is obtained by using
the motion of the helicopter to extend the field of view of
the LIDAR system. We decided not to utilize a nodding or
rotating mechanism for the following reasons: (1) the mech-
anisms require extra payload capacity and electric power;
(2) it is an additional component that could fail; (3) in most
cases, a mechanism that is specific to a helicopter and a LI-
DAR system must be custom-built, and the development
of a dependable mechanism is time-consuming and expen-
sive; and (4) especially on smaller aircraft, it is often difficult
to find a mounting point for a 3D LIDAR system without
obstructing the field of view of the sensor.
The structure of the LAOA system is shown in
Figure 2. The user inputs to the system are a goal posi-
tion (2D position) and a goal height (height above ground).
Both are typically provided by a waypoint sequencer that
reads predefined flight plans or plans that are generated
by a global path planner. The cruise speed is a fixed sys-
tem parameter that depends on other parameters and is not
meant to be changed by the operator (see Table VI of the
Appendix). The system is divided into a perception and a
guidance & control part. The perception part is described in
Section 4 and the guidance & control part in Sections 5 and
6. The interaction of system components is controlled by a
state machine.
State machines are a well-suited formalism for the spec-
ification and design of large and complex reactive sys-
tems (Harel, 1987). We have utilized the extended-state
machine-based software framework ESM, which also facil-
itates a component-based real-time implementation of the
proposed methods (Merz et al., 2006). The main differences
between the classical finite-state machine formalism and
ESM are the introduction of hierarchical and concurrent states
and of data ports and paths for modeling data flow. The ma-
jority of the methods proposed in this paper are described
in state diagrams at a level of detail necessary to understand
the behavior of the helicopter. A brief description of a subset
of the ESM language that is used in this paper can be found
in Table IX of the Appendix.
The LAOA functions are implemented on the existing
computers of the base helicopter system. The perception
part is implemented on the navigation computer and the
guidance & control part is implemented on the flight com-
puter (see Table V of the Appendix). Both computers run a
Linux operating system with a real-time kernel patch and
the ESM run-time environment. The state machines are ex-
ecuted at 200 Hz (clock frequency of transition trigger; see
Table IX of the Appendix). All calculations of variables and
events used in the state diagrams in this paper are exe-
cuted within 0.5 ms on the specified hardware (assuming
sensor readings are available in main memory). The maxi-
mum latency for reacting to an (external) event is 5 ms and
the control functions of the LAOA system are executed at
100 Hz.
4. OBSTACLE AND TERRAIN DETECTION
Obstacle and terrain detection is based on range measure-
ments from a 2D LIDAR system and attitude estimates from
the navigation system of the helicopter. The LAOA system
is designed for 2D LIDAR systems with a scan range of
approximately 270°. A scan rate of approximately 40 Hz
and a scan resolution of approximately half a degree are
sufficient for operations similar to the ones described in
Section 7 with the parameters specified in Table VI of the
Appendix.² The LIDAR system is mounted with the scan
plane parallel to the xz-plane of the helicopter body frame
and the 90° blind section facing backwards (see Figure 3). In
our current implementation, we assume there are no obsta-
cles above the helicopter. However, for future applications
requiring detection of overhead obstacles, the LIDAR sys-
tem is mounted with the scan area symmetrical to the x-axis
of the body frame rather than more downward-oriented.
A precise alignment with the reference frame of the nav-
igation system is not required. Alignment errors of a few
degrees are tolerated with the parameters specified in Table
VI of the Appendix.

²During waggle cruise flight, the spatial scan resolution is approxi-
mately 12 cm vertically and 61 cm horizontally at safeObstacleScan-
Range distance and at highest yaw rate.

[Figure 3 (diagram): the LIDAR-based terrain and obstacle detection in the xz-plane of the body frame, showing reflection points, the fitted line, hmin, hgnd, df, the safe obstacle scan range, and the detection windows of size detectionWindowSize.]
Figure 3. Illustration of the LIDAR-based terrain and obstacle detection.
LIDAR scans are synchronized with the attitude es-
timates from the navigation system. Attitude estimates
(φk, θk, ψk) are recorded at the time a sync signal is received
from the LIDAR system (k referring to the kth scan). The
sync signal indicates the completion of a scan. A LIDAR
scan is processed after the sync signal has been received.
We define a reflection point as a point in the environ-
ment where the laser beam of the LIDAR system is reflected.
A reflection point is assumed to be part of an obstacle. A
LIDAR reading (ri, λi) is a reflection point expressed in po-
lar coordinates relative to the x-axis of the body frame (see
Figure 3; i referring to the ith reading with a valid range
value). Assuming the helicopter is stationary during a scan,³
reflection points can be expressed in the leveled body frame
of the helicopter⁴ using the recorded attitude estimates. For
obstacle and terrain detection, only the x and z components
of reflection points are used. A 2D reflection point in the
leveled body frame is calculated as follows:
\[
\begin{pmatrix} x_i \\ z_i \end{pmatrix} =
\begin{pmatrix}
\cos\theta_k & \sin\theta_k \cos\phi_k \\
-\sin\theta_k & \cos\theta_k \cos\phi_k
\end{pmatrix}
\begin{pmatrix} r_i \cos\lambda_i \\ r_i \sin\lambda_i \end{pmatrix}. \tag{1}
\]
We define S as the set of all reflection points (xi, zi) in
a scan with a minimum distance from the sensor (ri ≥
minimalLidarRange). Readings with a shorter range are dis-
carded as they are likely to be caused by the main rotor or
insects. Apart from that, the detection of an obstacle within
the specified short minimum distance would be too late to
initiate an avoidance maneuver.

³The parameters of the avoidance functions listed in Table VI of
the Appendix are chosen such that the error introduced by this
assumption is accommodated for.
⁴The leveled body frame is the helicopter-carried NED frame ro-
tated by the helicopter yaw angle around the N axis.
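The projection of Eq. (1) together with the minimum-range filter can be sketched as follows. This is a minimal illustration, not the paper's implementation; the minimum-range value is a placeholder, as the actual parameter is system-specific.

```python
import math

MINIMAL_LIDAR_RANGE = 0.5  # m; placeholder value, the real parameter is system-specific

def scan_to_points(readings, phi, theta):
    """Project LIDAR readings (r_i, lambda_i) into the leveled body frame, Eq. (1).

    readings: list of (range, bearing) tuples in the scan plane, bearings
    relative to the body x-axis; phi/theta: roll/pitch estimates recorded
    at the scan sync signal. Readings closer than the minimum range
    (likely the main rotor or insects) are discarded.
    """
    points = []
    for r, lam in readings:
        if r < MINIMAL_LIDAR_RANGE:
            continue  # too close: discard; an avoidance maneuver would be too late anyway
        x = math.cos(theta) * r * math.cos(lam) + math.sin(theta) * math.cos(phi) * r * math.sin(lam)
        z = -math.sin(theta) * r * math.cos(lam) + math.cos(theta) * math.cos(phi) * r * math.sin(lam)
        points.append((x, z))
    return points
```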
4.1. Obstacles
For obstacle avoidance during forward flight, we only con-
sider the closest obstacle in front of the helicopter. The clos-
est frontal obstacle is defined as the 2D reflection point with
the minimum x-component in a frontal detection window.
The set of 2D reflection points Sf in the detection window
and the horizontal distance df to the closest frontal obstacle
are given by
\[
S_f = \{(x_i, z_i) : |z_i| \le \tfrac{1}{2}\,\text{detectionWindowSize},\ (x_i, z_i) \in S\}, \tag{2}
\]
\[
d_f = \min\{x_i : (x_i, z_i) \in S_f\}. \tag{3}
\]
If Sf is empty, df is set to zero.
For obstacle avoidance during descents, we also calcu-
late the vertical distance to the closest obstacle below the
helicopter:
\[
S_d = \{(x_i, z_i) : |x_i| \le \text{farObstacleDistance},\ (x_i, z_i) \in S\}, \tag{4}
\]
\[
d_d = \min\{z_i : (x_i, z_i) \in S_d\}. \tag{5}
\]
If Sd is empty, dd is set to zero.
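Eqs. (2)-(5) reduce each scan to two scalars. A direct transcription (the window sizes below are placeholder values, not the ones from Table VI of the Appendix):

```python
DETECTION_WINDOW_SIZE = 2.0   # m; placeholder value
FAR_OBSTACLE_DISTANCE = 30.0  # m; placeholder value

def closest_frontal_obstacle(points):
    """Horizontal distance d_f to the closest frontal obstacle, Eqs. (2)-(3).
    Returns 0.0 if the frontal detection window contains no points."""
    sf = [x for (x, z) in points if abs(z) <= 0.5 * DETECTION_WINDOW_SIZE]
    return min(sf) if sf else 0.0

def closest_obstacle_below(points):
    """Vertical distance d_d to the closest obstacle below, Eqs. (4)-(5).
    Returns 0.0 if the downward detection window contains no points."""
    sd = [z for (x, z) in points if abs(x) <= FAR_OBSTACLE_DISTANCE]
    return min(sd) if sd else 0.0
```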
In our experiments, we did not filter reflection points
for the calculation of df to ensure that even the smallest
obstacles are detected. The system also detects larger insects
or birds. This may not be wanted, as such animals either
avoid the aircraft or are small enough not to damage it. To
make the system less susceptible to such objects, a temporary
filter could
be applied. However, as the situation rarely occurred in our
experiments, we did not investigate this option further.
4.2. Terrain
For terrain following, the system requires a height estimate
relative to the ground. We define the height above ground
hgnd as the intercept of the least-squares line fitted to the 2D
reflection points in a detection window underneath the he-
licopter.⁵ The set of 2D reflection points in the detection
window is given by
\[
S_g = \{(x_i, z_i) : |x_i| \le \tfrac{1}{2}\,\text{detectionWindowSize},\
|\beta_i| \le \tfrac{1}{2}\,\text{detectionWindowAngle},\ (x_i, z_i) \in S\}, \tag{6}
\]
where βi = atan2(xi, zi). The detectionWindowAngle condi-
tion limits the number of samples used for the line fit. If
there are fewer than two samples, no line fitting is per-
formed and hgnd is set to zero.
Apart from a height estimate, the line fit also provides
an estimate of the slope angle of the terrain. However, we
did not see the need to utilize terrain slope information for
operations at lower cruise speed in a typical rural environ-
ment.
In addition to the height above ground, we calculate a
minimum height hmin used for detecting terrain discontinu-
ities during terrain following. The minimum height is given
by
\[
h_{\min} = \min\{z_j : |z_i - z_j| \le \text{terrainPointVariation},\
i \ne j,\ (x_i, z_i) \in S_g,\ (x_j, z_j) \in S_g\}. \tag{7}
\]
If there are fewer than two elements in Sg, hmin is set to
zero. In Eq. (7), LIDAR readings are filtered by requiring at
least two reflection points at a similar distance. The spatial
filter prevents the terrain-following system from reacting to
readings that are likely to be false-positive detections.
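The line-fit intercept of Section 4.2 and the spatial filter of Eq. (7) can be sketched as follows; function names and the parameter value are illustrative, not from the paper.

```python
TERRAIN_POINT_VARIATION = 0.3  # m; placeholder value

def height_above_ground(points_sg):
    """h_gnd: intercept of the least-squares line z = m*x + b fitted to the
    reflection points underneath the helicopter (Section 4.2).
    Returns 0.0 with fewer than two samples."""
    n = len(points_sg)
    if n < 2:
        return 0.0
    sx = sum(x for x, _ in points_sg)
    sz = sum(z for _, z in points_sg)
    sxx = sum(x * x for x, _ in points_sg)
    sxz = sum(x * z for x, z in points_sg)
    denom = n * sxx - sx * sx
    if denom == 0.0:
        return 0.0  # degenerate (vertical) point configuration
    m = (n * sxz - sx * sz) / denom  # terrain slope estimate (unused here)
    return (sz - m * sx) / n         # intercept b, i.e. h_gnd

def minimum_height(points_sg):
    """h_min per Eq. (7): the smallest z confirmed by at least one other
    point at a similar distance (a spatial false-positive filter)."""
    candidates = [zj for j, (_, zj) in enumerate(points_sg)
                  for i, (_, zi) in enumerate(points_sg)
                  if i != j and abs(zi - zj) <= TERRAIN_POINT_VARIATION]
    return min(candidates) if candidates else 0.0
```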
5. FLIGHT MODES
5.1. Mode Switching
The LAOA system utilizes a hybrid control scheme with
five flight modes: hover, climb, pirouette descent, yaw, and
waggle cruise. The mode switching is modeled by a state
machine. The state diagram on the left in Figure 4 shows
possible transitions between flight modes. The central flight
mode is hover. The hover mode is an atomic state (a state
that does not encapsulate other states), whereas the four
nonstationary flight modes (shown as superstates) are flight
⁵Above sloped terrain, the height value is larger than the distance
of the helicopter to the closest terrain point. This is accommodated
for by parameters of the terrain-following and avoidance functions
listed in Table VI of the Appendix and the terrain discontinuity
behavior described in Section 5.8.
modes with several nested states. To ensure smooth switch-
ing to the hover mode, nonstationary flight modes are only
exited when the helicopter velocities are low and the atti-
tude is normal.
The nonstationary flight modes discussed in this pa-
per consist of acceleration, run, deceleration, and stabiliza-
tion states as depicted in the state diagram on the right in
Figure 4. The events brakeThreshold, velocityReached, closeTo-
Target, farFromTarget, and stableHover are sent from a con-
current state machine that analyzes the helicopter state in
relation to the reference values of the current flight mode.
The lowVelocity event is sent from a concurrent state machine
that monitors the velocity of the helicopter. The queryVelocity
event is used to request an analysis of the current helicopter
velocity. This is necessary as the lowVelocity event could be
sent while the state machine is not in a state reacting to
the event. A hoverMode event aborts a nonstationary flight
mode.
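As a minimal illustration of the mode switching (not the actual hierarchical, concurrent ESM implementation), a flat transition table captures that hover is the central mode and that nonstationary modes return to it; the event names here are assumptions based on the state diagram.

```python
# Flight modes of Section 5.1; the transition table is a simplified sketch.
HOVER, CLIMB, PIROUETTE_DESCENT, YAW, WAGGLE_CRUISE = (
    "hover", "climb", "pirouetteDescent", "yaw", "waggleCruise")

TRANSITIONS = {
    # hover is the central mode: all nonstationary modes start from it
    (HOVER, "climbMode"): CLIMB,
    (HOVER, "pirouetteDescentMode"): PIROUETTE_DESCENT,
    (HOVER, "yawMode"): YAW,
    (HOVER, "waggleCruiseMode"): WAGGLE_CRUISE,
    # nonstationary modes exit to hover once velocities are low
    (CLIMB, "hoverMode"): HOVER,
    (PIROUETTE_DESCENT, "hoverMode"): HOVER,
    (YAW, "hoverMode"): HOVER,
    (WAGGLE_CRUISE, "hoverMode"): HOVER,
}

def step(mode, event):
    """Return the next flight mode; unknown events leave the mode unchanged."""
    return TRANSITIONS.get((mode, event), mode)
```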
5.2. Controllers
The LAOA system requires an underlying external control
system that tracks the longitudinal, lateral, and vertical he-
licopter velocity reference v^c_xyz = (v^c_x, v^c_y, v^c_z) in the
leveled body frame of the helicopter (see Section 4) and the
yaw angle reference ψ^c. We assume the controllers are de-
coupled and acceleration-limited. The maximum linear and
angular accelerations are an order of magnitude higher than
those required for the linear and angular velocity changes
commanded by the LAOA system. The superscript 'c' is used
for references for the external control system. For other ref-
erences, we use the superscripts 'f' or 'v' for fixed or variable
values, respectively.
The LAOA system includes three decoupled SISO PI
position controllers Cpx (e), Cpy (e), and Cpz (e) where e is the
control error. The position controllers produce the refer-
ences for the external velocity controllers. The integral term
is usually only required for compensation of steady-state
errors. We determined the gains of the position controllers
empirically based on the critical points found through flight
testing.
Height is either defined as height above ground for low-
altitude flights or height above takeoff point for flights be-
yond safe detection range of the LIDAR system. The height
above the takeoff point is measured with a barometric al-
timeter with the reference pressure set to the pressure at
the takeoff point. The vertical position is regulated inde-
pendently of the horizontal position. The terrain-following
behavior emerges from regulating the height above ground
during cruise flight.
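A decoupled SISO PI position controller of this kind can be sketched as follows; the gains and the output limit are placeholders (in the paper they were tuned empirically in flight tests), and the reset method reflects that the integral terms are zeroed in some flight modes.

```python
class PIController:
    """SISO PI controller C(e) producing a velocity reference from a
    position error (Section 5.2). Gains and limit are placeholder values."""

    def __init__(self, kp, ki, out_limit):
        self.kp, self.ki, self.out_limit = kp, ki, out_limit
        self.integral = 0.0

    def reset_integral(self):
        # e.g. during pirouettes the integral terms are set to zero
        self.integral = 0.0

    def update(self, error, dt):
        self.integral += self.ki * error * dt
        out = self.kp * error + self.integral
        # saturate so that commanded velocities stay in the flight envelope
        return max(-self.out_limit, min(self.out_limit, out))
```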
5.3. Hover Mode
The hover mode requires a horizontal position reference
p^f_NE in the earth-fixed NED frame and a height reference h^f.
Depending on the configuration, the height reference is ei-
ther defined as height above ground (negative value) or
height in the earth-fixed NED frame. The corresponding
height estimates are h = −hgnd (see Section 4.2) or h = pD,
respectively.

Figure 4. Flight modes of the LAOA system (left) and typical states of a nonstationary flight mode (right).
The velocity references for the external controllers are
calculated as follows:
\[
v^c_x = C_{p_x}(\Delta p_x), \quad v^c_y = C_{p_y}(\Delta p_y), \quad
v^c_z = C_{p_z}(h^f - h), \tag{8}
\]
where \(\Delta p_x\) and \(\Delta p_y\) are components of the position error
vector given by
\[
\begin{pmatrix} \Delta p_x \\ \Delta p_y \end{pmatrix} =
\begin{pmatrix} \cos\psi & \sin\psi \\ -\sin\psi & \cos\psi \end{pmatrix}
\left( p^f_{NE} - p_{NE} \right), \tag{9}
\]
where \(p_{NE}\) is the estimated position of the helicopter in the
earth-fixed NED frame.
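The hover-mode references of Eqs. (8) and (9) can be sketched as follows; the position controllers are passed in as callables, and the function signature is an assumption for illustration.

```python
import math

def hover_velocity_refs(p_ref_ne, p_ne, psi, h_ref, h, c_px, c_py, c_pz):
    """Velocity references in hover mode, Eqs. (8)-(9): the NE position
    error is rotated by the yaw angle psi into the leveled body frame and
    fed to the position controllers c_px, c_py, c_pz (callables)."""
    dn = p_ref_ne[0] - p_ne[0]
    de = p_ref_ne[1] - p_ne[1]
    dpx = math.cos(psi) * dn + math.sin(psi) * de    # longitudinal error
    dpy = -math.sin(psi) * dn + math.cos(psi) * de   # lateral error
    return c_px(dpx), c_py(dpy), c_pz(h_ref - h)
```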
The yaw angle is fixed and controlled by the external
helicopter control system (ψ^c = ψ^f). For yaw angle changes,
we use the yaw mode that is described in Section 5.5.
5.4. Climb Mode
The climb mode is used for vertical height changes and re-
quires a height reference h^f. It is a nonstationary flight mode
that consists of the four main states introduced in Section
5.1. To reconfigure the control, a reconfigureControl event (see
Figure 4) is sent to a concurrent state machine that models
the interaction of the different controllers. The horizontal
position control and the yaw angle control are identical to
the hover mode. The vertical velocity references for the ex-
ternal controllers in the acceleration, run, and deceleration
states are
\[
v^c_z = a(t - t_0), \qquad v^c_z = v, \qquad v^c_z = v - a(t - t_0), \tag{10}
\]
where a is a fixed vertical acceleration (negative value),
v = −verticalSpeed is a fixed vertical speed, and t_0 is the time
when entering the corresponding state. While in a given
state, the velocity references are calculated and passed to
the controller.
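The trapezoidal velocity profile of Eq. (10) can be sketched as a single function of the current state; the state names follow Figure 4, and the NED sign convention (up is negative) is assumed.

```python
def climb_velocity_ref(state, t, t0, a, v):
    """Vertical velocity reference in the climb mode's acceleration, run,
    and deceleration states, Eq. (10). For a climb, a and v are negative
    (NED convention: up is negative z)."""
    if state == "acceleration":
        return a * (t - t0)       # ramp up to the fixed vertical speed
    if state == "run":
        return v                  # hold the fixed vertical speed
    if state == "deceleration":
        return v - a * (t - t0)   # ramp back toward zero
    return 0.0                    # hover/stabilization
```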
The system transitions from the acceleration to the run
state when reaching the desired vertical velocity. When
reaching a height that is close to the height reference, the
system enters the deceleration state. The distance is mainly
determined by the specified vertical deceleration of the he-
licopter. The system leaves the deceleration state when the
velocity is sufficiently low to enable hover control. If suf-
ficiently close to the height reference, the system stabilizes
the hover using the desired height as reference for hover
control. Otherwise, it uses the current height as reference
and sends an error event. If the system receives a hoverMode
event while it is in climb mode, the helicopter decelerates
and the system transitions to hover mode.
5.5. Yaw Mode
The yaw mode is used for changing the yaw angle of the he-
licopter to a desired yaw angle ψ^f while the aircraft hovers.
The position control is identical to the hover mode. The yaw
angle change is achieved through an increase or decrease of
the yaw angle reference ψ^c for the external yaw controller
depending on the direction of rotation. The direction of rota-
tion is determined by the smaller angle difference between
the start and the desired yaw angle of the two possible rota-
tions. The flight mode has nested states similar to the climb
[Figure 5 (top view): the pirouette scan and the corridor-shaped area scanned during a subsequent waggle cruise flight, with the safe obstacle scan range and corridorWidth indicated.]
Figure 5. Safe scan area of the waggle cruise flight following a pirouette plotted for the parameters specified in Table VI of the Appendix (angles not to scale).
mode. The yaw angle references for the external yaw con-
troller in the acceleration, run, and deceleration states are
\[
\psi^c = \pm\tfrac{1}{2}\alpha(t - t_0)^2 + \psi^c_0, \qquad
\psi^c = \pm\omega(t - t_0) + \psi^c_0,
\]
\[
\psi^c = \pm\omega(t - t_0) \mp \tfrac{1}{2}\alpha(t - t_0)^2 + \psi^c_0, \tag{11}
\]
where α is a fixed angular acceleration, ω is a fixed angular
velocity, t_0 is the time, and ψ^c_0 is the yaw angle reference
when entering the corresponding state.
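The direction-of-rotation rule and the yaw reference profile of Eq. (11) can be sketched as follows; the angle-wrapping arithmetic and state names are assumptions for illustration.

```python
import math

def yaw_direction(psi_start, psi_target):
    """Sign of the smaller of the two possible rotations (Section 5.5):
    +1.0 for increasing yaw, -1.0 for decreasing yaw."""
    d = (psi_target - psi_start + math.pi) % (2.0 * math.pi) - math.pi
    return 1.0 if d >= 0.0 else -1.0

def yaw_ref(state, t, t0, s, alpha, omega, psi0):
    """Yaw angle reference per Eq. (11); s is the sign from yaw_direction,
    psi0 the reference when entering the corresponding state."""
    dt = t - t0
    if state == "acceleration":
        return s * 0.5 * alpha * dt * dt + psi0
    if state == "run":
        return s * omega * dt + psi0
    # deceleration
    return s * omega * dt - s * 0.5 * alpha * dt * dt + psi0
```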
5.6. Pirouette Descent Mode
The pirouette descent mode in combination with the state
machine described in Section 6.2 enables safe vertical de-
scents to desired heights h^f close to the ground. The heli-
copter descends while rotating about an axis through the
position reference point p^f_NE that is parallel to the z-axis
of the leveled body frame. Thus, the field of view of the LI-
DAR system is extended from a planar scan to a cylindrical
scan. The pirouette is performed by changing the yaw angle
reference at a constant rate during the descent:
\[
\psi^c = \omega_z(t - t_0) + \psi_0, \tag{12}
\]
where ω_z = pirouetteYawRate is a fixed yaw rate, t_0 is the
time, and ψ_0 is the yaw angle when entering the accelera-
tion state. We did not include acceleration and deceleration
states for yaw control in this flight mode for two reasons: (1)
the external control system includes an angular acceleration
limiter and (2) accurate yaw angle control is not required
for the pirouette descent. While performing pirouettes, the
integral terms of the horizontal position controllers are set
to zero.
The flight mode has the same nested states as the
climb mode and thus we use the same state machine
(flightModeA). The yaw control is reconfigured by send-
ing the event specialMode before entering the superstate (see
Figure 4). The reference velocities are calculated as in Eq. (10),
with positive values for the vertical acceleration a and ver-
tical speed v.
5.7. Waggle Cruise Mode
The LAOA system is waypoint-based and uses horizontal
straight-line paths for flights to waypoints. If waypoints are
at different heights, the height change is achieved through
the climb mode as described earlier. The waggle cruise mode
combines a straight-line path-following controller with a
waggle motion generator for extending the field of view of
the LIDAR system. The flight mode requires two references:
a 2D waypoint position p^fwp_NE and a ground track angle ψ^f_g.
The waggle motion during forward flight extends the
field of view of the LIDAR system from a planar scan to a
corridor-shaped scan (see Figure 5 and Section 4). The
flight mode has nested states similar to the climb mode
(see Figure 4). The height reference is fixed. Height is esti-
mated based on either distance measurements to the ground
(Section 4.2) or barometric pressure.
The only difference between waggle cruise and a sim-
ple cruise is the yaw control. Both flight modes use the
same state machine (the simple cruise mode is not shown in
Figure 4 as it is not required for the LAOA system). The
different configuration of the yaw control is realized by
sending the specialMode event before entering the super-
state.
[Figure 6 (diagram): the straight-line path-following geometry, showing the waypoint p^fwp_NE, the ground track frame (x_g, y_g), the cross-track error, the ground track angle ψ_g, the heading error ψ_e, and the path velocity v^v relative to North and East.]
Figure 6. Straight-line path following.
During simple cruise, the yaw angle reference is fixed.
During waggle cruise, the yaw angle reference is given by
\[
\psi^c = \psi_w \sin\!\left(\frac{2\pi}{T_w}(t - t_0)\right) + \psi^f_g, \tag{13}
\]
where ψ_w = maxWaggleYawAngle, T_w = wagglePeriodTime, and
t_0 is the time when entering the state. Similar to the pirouette
descent, there are no acceleration and deceleration states
for the yaw control during the transition from and to hover
mode. The maximum yaw rate and yaw acceleration during
waggle cruise are determined by ψ_w and T_w. When choosing
these two parameters, the limitations of the helicopter have
to be taken into account.
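The sinusoidal yaw reference of Eq. (13) is a one-liner; the two parameter values below are placeholders, not the ones used in the paper's experiments.

```python
import math

MAX_WAGGLE_YAW_ANGLE = math.radians(15.0)  # rad; placeholder value
WAGGLE_PERIOD_TIME = 4.0                   # s; placeholder value

def waggle_yaw_ref(t, t0, psi_track):
    """Sinusoidal yaw reference superimposed on the ground track angle
    during waggle cruise, Eq. (13)."""
    return (MAX_WAGGLE_YAW_ANGLE
            * math.sin(2.0 * math.pi / WAGGLE_PERIOD_TIME * (t - t0))
            + psi_track)
```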
The horizontal velocity references v^c_x and v^c_y during
cruise flight are calculated as follows:
\[
v^c_x = v^v_x + C_{p_x}(c_x), \tag{14}
\]
\[
v^c_y = v^v_y + C_{p_y}(c_y), \tag{15}
\]
where v^v_x and v^v_y are the components of the path velocity
vector v^v, and c_x and c_y the components of the cross-track
error vector c in the leveled body frame (see Figure 6). The
two position controllers C_{p_x} and C_{p_y} are given in Eq. (8).
The cross-track error vector c in the leveled body frame
is given by
\[
\begin{pmatrix} c_x \\ c_y \end{pmatrix} = c_{y_g}
\begin{pmatrix} -\sin\psi_e \\ \cos\psi_e \end{pmatrix},
\quad \text{where } \psi_e = \psi^f_g - \psi, \tag{16}
\]
and c_{y_g} is the y-component of the cross-track error vector in
the ground track frame given by
\[
c_{y_g} = -\Delta p_N \sin\psi_g + \Delta p_E \cos\psi_g
\quad \text{and} \quad
\begin{pmatrix} \Delta p_N \\ \Delta p_E \end{pmatrix} = p^{fwp}_{NE} - p_{NE}. \tag{17}
\]
The velocity vector v^v in the leveled body frame is given
by
\[
\begin{pmatrix} v^v_x \\ v^v_y \end{pmatrix} = v^v
\begin{pmatrix} \cos\psi_e \\ \sin\psi_e \end{pmatrix}, \tag{18}
\]
where v^v is the desired path speed value that is gen-
erated during the acceleration, run, and deceleration states
of the nonstationary flight mode. The path speed values
are calculated as in Eq. (10) with a = horizontalAcceleration and
v = waggleCruiseSpeed (fixed values).
The velocity references for the external controllers for
the experiments described in this paper are given by
\[
\begin{pmatrix} v^c_x \\ v^c_y \end{pmatrix} =
\begin{pmatrix} \cos\psi_e & -\sin\psi_e \\ \sin\psi_e & \cos\psi_e \end{pmatrix}
\begin{pmatrix} v^v \\ v_{y_g} \end{pmatrix}, \tag{19}
\]
where v_{y_g} = C_{p_y}(c_{y_g}). In this method, the velocity references
are first calculated in the ground track frame and then
transformed into the leveled body frame. In contrast to this
method, the first method [Eqs. (14) and (15)] allows us to have
different gains for x and y position control. This makes sense
if the external velocity controllers behave differently for
forward and sideward flight, which is likely to be the case
for single-rotor helicopters.
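The ground-track-frame variant of Eqs. (17) and (19) can be sketched as follows; the function signature and the callable lateral controller are assumptions for illustration.

```python
import math

def cruise_velocity_refs(p_wp_ne, p_ne, psi, psi_g, v_path, c_py):
    """Cruise velocity references per Eqs. (17) and (19): the cross-track
    error is computed in the ground track frame and the resulting velocity
    vector is rotated into the leveled body frame."""
    dn = p_wp_ne[0] - p_ne[0]
    de = p_wp_ne[1] - p_ne[1]
    c_yg = -dn * math.sin(psi_g) + de * math.cos(psi_g)  # cross-track error, Eq. (17)
    v_yg = c_py(c_yg)                                    # lateral correction velocity
    psi_e = psi_g - psi                                  # heading error
    vx = math.cos(psi_e) * v_path - math.sin(psi_e) * v_yg
    vy = math.sin(psi_e) * v_path + math.cos(psi_e) * v_yg
    return vx, vy
```

For example, a helicopter 1 m east of a northbound track with its nose on the track heading gets a pure leftward correction.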
5.8. Terrain Following
The terrain-following behavior emerges from regulating
the height above ground during cruise flight. Terrain fol-
lowing is activated and deactivated through the events
lowAltitudeFlightOn and lowAltitudeFlightOff, which are sent
to the height controller while executing the state machine
for height changes described in Section 6.2. During terrain
following, the height controller uses h = −hgnd estimates
from the terrain detection module (see Section 4.2).
In the case of detection of a terrain discontinuity caused
by a vertical structure or similar, a special behav-
ior is activated: verticalOffset is added to the height observa-
tion h if the hmin value is less than discontinuityHeight. The
offset is removed after the specified decay time verticalOffset-
Decay. If another discontinuity is detected during the decay
time, the decay timer is restarted. The maxi-
mum vertical velocities commanded by the vertical position
controller must be limited to stay within the flight envelope
of the helicopter. In particular, commanding a high descent
velocity must be avoided as it could cause the helicopter
to enter the vortex ring state. In our implementation, the
vertical velocities are limited to the verticalSpeed value (see
Table VI of the Appendix).
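The discontinuity behavior with its restartable decay timer can be sketched as follows; the three parameter values are placeholders, and the sign convention of the added offset is simplified for illustration.

```python
VERTICAL_OFFSET = 3.0        # m; placeholder value
VERTICAL_OFFSET_DECAY = 5.0  # s; placeholder value
DISCONTINUITY_HEIGHT = 2.0   # m; placeholder value

class DiscontinuityBehavior:
    """Terrain discontinuity behavior of Section 5.8: when h_min drops
    below the threshold, a vertical offset is added to the height
    observation and kept until no discontinuity has been seen for the
    decay time."""

    def __init__(self):
        self.last_detection = None

    def adjusted_height(self, h, h_min, t):
        if h_min != 0.0 and h_min < DISCONTINUITY_HEIGHT:
            self.last_detection = t  # (re)start the decay timer
        active = (self.last_detection is not None
                  and t - self.last_detection < VERTICAL_OFFSET_DECAY)
        return h + VERTICAL_OFFSET if active else h
```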
6. LOW-ALTITUDE FLIGHT
We define a low-altitude flight as a flight that is performed
below typical tree top height in a rural environment. For safe
operations close to terrain and obstacles, the helicopter must
keep a minimal distance from objects. The minimal distance
is mainly limited by the characteristics of the aircraft’s guid-
ance & control system and the error of range measurements
within the specified environmental conditions. The meth-
ods we propose have parameters to adapt the LAOA sys-
tem to different helicopter systems and environments. The
Figure 7. State diagram describing start, finish, and abort of a
low-altitude flight. It is concurrent with the flight mode switch-
ing state machine (Figure 4). All state machines shown in the
remainder of this section are encapsulated in state 2.
parameters of our implementation are provided in Table VI
of the Appendix.
The methods described in this section are reactive, and
state machines are used to model the behavior of the he-
licopter. The events of the state machines are defined in
Table VII and the variables used in the state diagrams are
calculated according to Table VIII. All state machines mod-
eling flights at low altitude are encapsulated in the lowAlti-
tudeFlight superstate of the top-level state machine for low-
altitude flight as depicted in Figure 7. It is assumed the he-
licopter is in hover mode before executing and terminating
encapsulated top-level state machines.
We developed two obstacle avoidance strategies: a rel-
atively simple strategy that is suitable for many rural ar-
eas with isolated obstacles such as single trees, and a more
complex strategy that reliably guides the helicopter to a
goal point in more complex environments. Apart from the
assumptions mentioned in the previous sections, we as-
sume that the goal point can be reached safely, i.e., there
exists a path with sufficient width. If no such path ex-
ists, the helicopter tries to reach the goal point until a
low fuel warning is sent, which will abort the low-altitude
flight.
In the following paragraphs, we consider configura-
tions of obstacles with gaps smaller than the minimum path
width required for the helicopter to pass through as a single
obstacle. Both avoidance strategies ensure a safe distance
to obstacles. The first strategy was developed primarily
for testing the overall system including obstacle detection
and flight modes. In both strategies, the helicopter performs
waggle cruise flights until it detects obstacles. If an obstacle
is detected, the helicopter decelerates and switches to the
hover mode. Then it scans the environment for obstacles
and calculates an avoidance waypoint.
6.1. Top-level State Machine for Low-altitude
Flight
The state diagram in Figure 7 describes the start, finish, and
abort of a low-altitude flight. When entering the state ma-
chine, the system checks if the helicopter is in hover mode.
It sends a queryHover event to the mode-switching state
machine and waits in state 1 for the hovering event. If the
mode-switching state machine does not reply after a speci-
fied time (timeout condition), a transition from state 1 to
the final state is made and an error event is sent. Otherwise,
the system enters the lowAltitudeFlight superstate (state 2).
A low-altitude flight is aborted when any of the events
causing a transition to state 5 occurs. The events are either
sent from a concurrent state machine that monitors the sys-
tem or from state machines encapsulated in state 2. If the
flight is aborted, the system goes through a deceleration
state before performing a vertical climb to safeAltitude in
state 4. A safe altitude is a height where it is safe to fly with-
out obstacle and terrain detection. In the normal case, the
low-altitude flight state 2 is terminated in hover mode and
a transition to state 3 is made in which the helicopter climbs
directly to a safe altitude.
6.2. Height Change
The helicopter must hover at a specified terrain-following
height (heightRef=cruiseHeight) before either of the two avoid-
ance strategies mentioned above can
be applied. Usually, the helicopter is at a different height
above ground and sometimes the height above ground is
unknown. The method described in this section guides the
helicopter safely to the terrain-following height. It may also
be applied to change the height after the helicopter has ar-
rived at the goal point (heightRef=goalHeight). All height
changes are performed following a vertical flight path.
The state diagram in Figure 8 describes the height
change method. All descents are performed using the pirou-
ette descent mode to make sure that the helicopter does not
collide with any surrounding obstacles. The first pirouette
is flown without height change as initially no assumption
about free space is made other than that it is safe to ro-
tate. For ascents, the helicopter does not fly pirouettes as we
assume there are no overhead obstacles. If the helicopter
is beyond the sensing range of the LIDAR system, it will
perform a pirouette descent until it reaches a safe sensing
range (safeLidarHeight event) to determine the height above
ground. When it reaches the safe sensing range, terrain fol-
lowing will be enabled and height control switched to LI-
DAR readings (lowAltitudeFlightOn). The final pirouette is
also flown without height change to ensure there is no ob-
stacle in any direction of departure.
If during a descent an obstacle is encountered within
a specified range, a pirouetteObstacle event is sent from
a system monitor. This causes a transition from the
Figure 8. State diagram describing the method for vertical
height changes at low altitude.
lowAltitudeFlight state to the decelerating state in Figure 7
and thus aborts the height-change procedure.
6.3. Obstacle-avoidance Strategy 1
The basic idea of strategy 1 is to combine a motion-to-goal
behavior with a motion-to-free-space behavior. The motion-
to-goal behavior is the attempted direct flight toward the
goal point. The motion-to-free-space behavior is exhibited
when attempting to reach free space by first flying toward an
avoidance waypoint (avoidanceWp) and then flying toward
an assumed free-space waypoint (freeSpaceWp1) in the direc-
tion of the goal point. The helicopter is in free space when
it reaches the free-space waypoint or when it reaches the
start waypoint. The helicopter flies to all waypoints using
the waggle cruise mode. The strategy is illustrated further
in an example below.
The three state diagrams in Figure 9 describe strategy
1. The left diagram contains the states of
the top-level state machine. The superstates representing
the two main behaviors of the helicopter are states 3 and
4. The transition from the motion-to-goal to motion-to-free-
space behavior is made when the helicopter detects a frontal
obstacle (farObstacle event).
The waggleCruise state machine is used in several state
machines. It consists of a state for calculating the ground
track angle, a state for initializing the bearingAngle variable
needed in both strategies, a state for aligning the helicopter
with the ground track, and a state for the actual flight to the
specified waypoint.
The motion-to-free-space behavior is modeled by the
freeSpaceFlight state machine. In strategy 1, the initial avoid-
ance direction is predefined (startDirection). However, the
direction will be changed after a specified number of un-
successful attempts to avoid the obstacle (state 6). When
the direction is changed, the helicopter flies to the last start
point before flying again toward the goal point. The spec-
ified number of attempts constrains the size of an obstacle
that can be avoided.
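The attempt counting and direction change described above can be sketched as one decision step (illustrative names; strategy 1 is actually specified by the state machines in Figure 9):

```python
def strategy1_step(frontal_obstacle, attempts, max_attempts, direction):
    """One decision step of the strategy-1 logic (sketch): fly toward
    the goal until a frontal obstacle appears, sidestep via avoidance
    waypoints otherwise, and flip the avoidance direction after too
    many unsuccessful attempts, returning to the last start point
    first. Names and the -1/+1 direction convention are illustrative."""
    if not frontal_obstacle:
        return ("motion_to_goal", attempts, direction)
    if attempts >= max_attempts:
        # Too many unsuccessful attempts on this side: fly back to the
        # last start point and try the other avoidance direction.
        return ("return_to_start", 0, -direction)
    return ("motion_to_free_space", attempts + 1, direction)
```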
Figure 9. State diagrams describing obstacle-avoidance strategy 1.
Figure 10. Illustration of obstacle-avoidance strategy 1
(startDirection=-1), showing the start point, goal point, obstacle,
avoidanceWp, freeSpaceWp1, and the parameters avoidanceWpDistance,
farObstacleDistance, freeSpaceWpDistance, and avoidanceAngle.
An obstacle-avoidance flight using strategy 1 is illustrated in
Figure 10. It shows a flight from a start point defined by the
helicopter's initial position to a goal point with a predefined
avoidance direction to the left. The heli-
copter detects an obstacle, flies to the left and toward the
obstacle while keeping a safe distance from the obstacle un-
til it reaches free space, and eventually it flies to the goal
point. Flight test results of strategy 1 including flights be-
yond visual range are given in Section 7.
The first strategy will not succeed in guiding the heli-
copter to a specified goal point in environments that contain
larger concave-shaped obstacles. In such environments, the
helicopter could get trapped, a phenomenon often observed
in reactive systems. The key parameter defining the admis-
sible curvature of the obstacle shape is the distance from
the point the helicopter detects the first obstacle to the goal
point. The required minimum path width is mainly deter-
mined by the distance condition of the farObstacle event (see
Table VII).
6.4. Obstacle-avoidance Strategy 2
This strategy was developed for rural areas with more
complex-shaped and larger obstacles. Although still being
reactive and computationally simple, the proposed strat-
egy reliably guides the helicopter to a goal point. It suc-
ceeds even in environments with concave-shaped obstacles
as long as the boundary length of an obstacle is limited as
defined below. The algorithm of the strategy is similar to
the Bug2 algorithm. Our strategy is designed for the waggle
cruise obstacle detection method and considers real-world
constraints such as safety distances from obstacles, limited
sensing range, limited accelerations, and uncertainties in ob-
stacle location, state estimation, and control. Furthermore,
it employs heuristics and utilizes assumptions about the
environment to be more efficient. The Bug2 algorithm is a
greedy algorithm that is complete. The algorithm of strategy
2 is not complete for the general case. However, for certain
cases it is equivalent to Bug2 and it successfully terminated
in all experiments we have conducted.
The state diagrams in Figures 11, 12, and 13 describe
strategy 2. The basic idea is the same as in Bug2, i.e., combin-
ing a motion-to-goal behavior (state 3 in Figure 11) with a
wall-following6 behavior (state 14 in Figure 11). The strategy
is illustrated further in an example below. The key differ-
ences to Bug2 are the directionScan state (state 11 in Figure 11)
for deciding in what direction to circumnavigate an obstacle
and different conditions when to abort the wall-following
behavior.
The state diagram and the pseudocode in Figure 12
describe the method for deciding the wall-following direc-
tion. The basic idea is to rotate from left to right while
hovering in front of a detected obstacle and to decide
on the direction that offers more free space.
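A minimal sketch of such a direction decision, assuming the scan is summarized as (bearing, range) pairs and that the side with the larger summed free range wins; the authoritative criterion is the pseudocode in Figure 12, and the function name and sign convention here are ours:

```python
def choose_wall_following_direction(scan, max_range):
    """Decide the circumnavigation direction from a horizontal scan
    taken while hovering in front of an obstacle. `scan` is a list of
    (bearing_deg, range_m) pairs with negative bearings to the left of
    the obstacle direction. The side whose returns indicate more free
    space wins: -1 = left, +1 = right (illustrative convention)."""
    left = sum(min(r, max_range) for b, r in scan if b < 0)
    right = sum(min(r, max_range) for b, r in scan if b > 0)
    return -1 if left >= right else 1
```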
The wall-following method utilizes the obstacle-
detection methods and flight modes introduced in
Sections 4 and 5. The two state diagrams in Figure 13 de-
scribe the method. The basic idea is to find a distant point on
the boundary of the obstacle in the wall-following direction,
rotate the helicopter to a certain angle relative to that point
away from the wall, fly a certain distance avoidanceWpDis-
tance in that direction, and repeat. To find a distant point on
the boundary, the helicopter first rotates toward the wall7
until the wallCatch event is sent and then rotates away from
the wall until the wallRelease event is sent. The distant point
is the point on the boundary at which the helicopter is point-
ing when the wallRelease event is sent.
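The waypoint-generation step can be sketched as follows, assuming north-east coordinates and compass bearings; the rotation away from the wall follows the convention of footnote 7, and the function name and geometry are illustrative:

```python
import math

def next_avoidance_wp(pos, release_bearing_deg, avoidance_angle_deg,
                      av_direction, wp_distance):
    """Compute the next avoidance waypoint (sketch): take the bearing
    to the distant boundary point found at the wallRelease event,
    rotate away from the wall by a fixed avoidance angle, and place a
    waypoint a fixed distance avoidanceWpDistance along that heading."""
    # av_direction = +1 for clockwise circumnavigation; rotating away
    # from the wall is then anticlockwise (cf. footnote 7), i.e. the
    # compass bearing decreases.
    heading = math.radians(release_bearing_deg
                           - av_direction * avoidance_angle_deg)
    north = pos[0] + wp_distance * math.cos(heading)
    east = pos[1] + wp_distance * math.sin(heading)
    return (north, east)
```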
The avoidance behavior of the helicopter may dif-
fer depending on its yaw angle before commencing a
rotate-toward-wall behavior. Before the helicopter enters
the corresponding state (state 10 in Figure 13), it rotates
to the startAngle (state 3). The helicopter then either points
in the direction of the last ground track (startAngle =
bearingAngle) or the bearing to the last detected obsta-
cle plus or minus an offset angle depending on the cur-
rent wall-following direction (startAngle = obstacleBearing +
avDirection · offsetAngle). The latter should prevent the heli-
copter from missing, during the hover search, an obstacle
that was detected during waggle cruise flight. However,
the offset angle method was developed at a later stage and
has not been flight-tested.
The key events that make our avoidance strategy differ-
ent from Bug2 are closeToGoal, outside, progress, maxAttempts,
and continue. All events abort the wall-following (state 14
in Figure 11). The progress and closeToGoal events are similar
to the events aborting the wall following in Bug2. Examples
of scenarios in which the key events occur can be found in
Section 6.5.
There are some important differences in the conditions
that must hold to produce the progress event (see Table VII):
at least one avoidance waypoint must have been generated,
it is sufficient to be close to the line to the goal point, and
a minimum progress distance is required. The line to the
goal point is not, as in Bug2, the line through the start point
6Here, a wall is the boundary of an object in a 2D plane.
7A rotation toward the wall means rotating clockwise if an obstacle
is circumnavigated clockwise and rotating anticlockwise otherwise.
Figure 11. State diagram describing obstacle-avoidance strategy 2 (top-level state machine).
Figure 12. State diagram and pseudocode describing the method for deciding the wall-following direction.
Figure 13. State diagram describing the wall-following method.
and the goal point (m-line) but rather the line through the
progressWp point and the goal point. The progressWp point is
initially the start point but might be changed to a different
point during the flight if the helicopter gets closer to the goal
point after a specified number of attempts. The minimum
progress is evaluated by comparing the distance from the
current position to the goal point with the distance from the
point at which the wall following started (wallFollowingStart
point) to the goal point.
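A sketch of the progress condition under these rules (thresholds and names are illustrative; the authoritative conditions are in Table VII):

```python
import math

def progress_event(pos, progress_wp, goal, wall_start, n_avoidance_wps,
                   path_tolerance, min_progress):
    """Sketch of the conditions for the `progress` event: at least one
    avoidance waypoint generated, the helicopter close to the line
    through the progressWp point and the goal point, and at least
    min_progress closer to the goal than the wallFollowingStart point."""
    if n_avoidance_wps < 1:
        return False
    # Perpendicular distance from pos to the progressWp-goal line.
    (x1, y1), (x2, y2), (x, y) = progress_wp, goal, pos
    line_len = math.hypot(x2 - x1, y2 - y1)
    cross = abs((x2 - x1) * (y1 - y) - (x1 - x) * (y2 - y1))
    if cross / line_len > path_tolerance:
        return False
    d_now = math.hypot(goal[0] - pos[0], goal[1] - pos[1])
    d_start = math.hypot(goal[0] - wall_start[0], goal[1] - wall_start[1])
    return d_start - d_now >= min_progress
```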
Strategy 2 generates a behavior similar to the Bug2 al-
gorithm if (1) the geometry of obstacles in an environment
is such that during the execution of the state machine, the
wall following will only be aborted by the progress or close-
ToGoal events; (2) no height changes occur that change the
perceived geometry of obstacles; (3) uncertainties in percep-
tion and control are neglected. However, our strategy does
not check if a goal point is reachable. We assume a path
exists.
Strategy 2 uses the corridorObstacle event for obstacle
detection instead of the farObstacle event used in strategy
1. The corridorObstacle is sent when obstacles are detected
inside a corridor of a specified length and width, which is
aligned with the flight path (see Figure 5 in Section 5.7).
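The corridor test can be sketched as a point-in-rectangle check in track-aligned coordinates (a minimal sketch; the obstacle-point representation and names are assumptions):

```python
import math

def corridor_obstacle(pos, track_deg, obstacles, length, width):
    """Sketch of the corridorObstacle condition: true if any detected
    obstacle point lies inside a corridor of the given length and
    width, aligned with the current ground track. Obstacle points are
    (north, east) tuples."""
    t = math.radians(track_deg)
    ax_n, ax_e = math.cos(t), math.sin(t)  # unit vector along track
    for (n, e) in obstacles:
        dn, de = n - pos[0], e - pos[1]
        along = dn * ax_n + de * ax_e        # distance along the track
        cross = -dn * ax_e + de * ax_n       # signed cross-track offset
        if 0.0 <= along <= length and abs(cross) <= width / 2.0:
            return True
    return False
```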
An obstacle-avoidance flight using strategy 2 is illus-
trated in Figure 14. The scenario is the same as in Figure 10
for strategy 1: a flight from a start point to a goal point with
one obstacle in between. In strategy 2, the helicopter detects
the obstacle, performs a scan to decide for a wall-following
direction, follows the boundary of the obstacle at a safe dis-
tance until it gets close to the original path, aborts the wall
following, and eventually flies to the goal point. Flight-test
results using strategy 2 are provided in Section 7.
6.5. Obstacle-avoidance Scenarios
In this section, we illustrate the behavior of the heli-
copter for strategy 2 in special scenarios.
Figure 14. Illustration of obstacle-avoidance strategy 2, showing
the start point, goal point, obstacle, direction scan, avoidanceWp,
and the parameters avoidanceAngle, wallReleaseDistance,
avoidanceWpDistance, corridorWidth, corridorLength, and
progressPathTolerance.
All scenarios were encountered during flight tests. The
scenarios also demon-
strate the application of the events aborting the wall-
following behavior. Figure 15 depicts nine different scenar-
ios. In the drawings, a line with an arrow represents the
approximate path of the helicopter while exhibiting either
a motion-to-goal or a wall-following behavior. When the
helicopter changes its behavior, a new line is drawn. If the
arrow of a line is not filled, it means the complete path is
not shown.
Figure 15(a) illustrates a typical cul-de-sac scenario.
This is an example in which strategy 1 would fail. When
strategy 2 is applied, the helicopter first flies toward the
goal point, then follows the wall until the progress event is
sent (i.e., it gets close to the line from the start to the goal
point). Finally, it flies to the goal point.
Figure 15(b) demonstrates the application of the closeTo-
Goal event. The event is sent at point A. The event is similar
to the event in Bug2 when a goal point is reached during
wall following. However, the closeToGoal event allows for
uncertainty in helicopter position and it makes the strat-
egy more efficient. Without the event, the helicopter would
continue following the wall and fly past the goal point.
Figure 15(c) illustrates the situation if the helicopter
follows a wall until the maxAttempts event is sent at point
A. An evaluation of the situation shows that progress has
been made during the wall following (wallFollowingProgress
event is sent as the right dotted line is shorter than the left
dotted line), hence the helicopter continues following the
wall. Figure 15(d) illustrates the situation if the helicopter
follows a wall until the maxAttempts event is sent at point
A and no progress has been made during wall following.
The helicopter then continues flying toward the goal point.
When detecting an obstacle at point B, it follows the wall in
the other direction.
Figure 15(e) shows the behavior of the helicopter when
contact with the wall is temporarily lost during wall
following. This usually happens at sharp convex corners of an
obstacle. Losing contact with the wall means the system en-
ters state 5 in Figure 13. In Figure 15(e), the helicopter loses
contact at point A, but when flying toward freeSpaceWp2, it
detects the wall again at point B and continues following
the wall. Figure 15(f) shows the behavior of the helicopter
when contact with the wall is lost during wall following
and not regained. In this scenario, the contact is lost at point
A and the helicopter reaches freeSpaceWp2 at point B and
the continue event is sent. The wall following is aborted and
the helicopter flies to the goal point.
Figure 15(g) illustrates the case of a false-positive detec-
tion at point A. The system fails to detect the obstacle during
the rotation in state 10 in Figure 13 and transitions to the
final state instead of state 5 as the wall-following behavior
has not been exhibited yet. Leaving the wall-following state
means that the helicopter continues flying to the goal point.
Figure 15 (h) shows the behavior of the helicopter when
reaching the border of the search area. The search area in
which the helicopter can operate while trying to reach the
goal point is defined as a corridor along the line from the
start point to the goal point. When crossing the border at
point A, the outside event is sent. The two dashed lines near
point A illustrate the hysteresis condition, which is neces-
sary to prevent repeatedly sending the outside event. When
crossing the border, the helicopter stops, turns around, and
follows the wall in the other direction. At point B, the
progress event is not sent, as the distance from the point
at which the helicopter first detected the obstacle to the goal
point is identical to the distance from point B to the goal
point and a minimum difference is required to cause the
event.
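The hysteresis condition can be sketched as a small stateful monitor (illustrative; the actual border and hysteresis values are system parameters):

```python
def make_outside_monitor(border, hysteresis):
    """Sketch of the hysteresis used for the `outside` event: the
    event fires once when the cross-track distance from the search
    corridor centerline exceeds the border, and can fire again only
    after the helicopter has come back inside border - hysteresis."""
    state = {"armed": True}
    def check(cross_track_dist):
        if state["armed"] and cross_track_dist > border:
            state["armed"] = False
            return True  # send the `outside` event exactly once
        if not state["armed"] and cross_track_dist < border - hysteresis:
            state["armed"] = True  # re-arm only well inside the border
        return False
    return check
```

Without the re-arming band, noise around the border would send the event repeatedly, which is what the two dashed lines near point A illustrate.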
Figure 15. Obstacle avoidance scenarios: (a) cul-de-sac, (b) close
to goal, (c) wall following progress, (d) no wall following
progress, (e) lost wall contact − obstacle, (f) lost wall contact −
no obstacle, (g) false positive, (h) outside search area,
(i) inconsistent gap sensing.
Figure 15(i) illustrates the case in which the helicopter follows
a wall with a gap that is only detected from one side while
circumnavigating an obstacle, and the helicopter finds a path
through the gap. In this situation, it could happen
that the helicopter would not stop encircling the obstacle as
the progress event is never sent. However, the maxAttempts
event stops the behavior at point A and the helicopter flies
to the goal point.
6.6. Approaching Vertical Structures
Many inspection tasks require flying toward an
approximately vertical structure and stopping at a close but
safe distance to collect frontal images. This can be easily
achieved with components of the LAOA system. We used
the following method for the structure inspection flights
described in Section 7: The operator defines a target point
and an approach point in the earth-fixed NED frame. The
target point is a point of the structure. The line between
the approach point and the target point defines the ap-
proach path and thus the direction from which the struc-
ture should be inspected. Given that the helicopter hovers
at the approach point, it is then commanded to fly to the
target point using the waggleCruise state machine (Figure
9). The hoverMode event must be sent once the helicopter
reaches a desired distance to the structure based on the df
value (Section 4.1). The farObstacle event can be used for the
farObstacleDistance condition, or the corridorObstacle event
for the corridorLength condition (see Table VII). After send-
ing the hoverMode event, the helicopter will decelerate and
hover.
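The stop condition of the approach can be sketched as follows, assuming the frontal obstacle distance df is sampled while flying the approach path (names are illustrative):

```python
def approach_events(df_samples, stop_distance):
    """Sketch of the structure-approach stop logic: fly toward the
    target point and send hoverMode once the frontal obstacle distance
    df drops to the desired standoff distance. `df_samples` is a list
    of (time, df) pairs; after the event the helicopter decelerates
    and hovers."""
    events = []
    for t, df in df_samples:
        if df is not None and df <= stop_distance:
            events.append((t, "hoverMode"))
            break  # one event is enough; the aircraft then hovers
    return events
```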
We assume there is no obstacle between the approach
point and the target point except for the structure itself.
The approach can be performed at any specified height.
Height changes are possible either at the approach point or
at the point the helicopter stops (inspection point) using the
height-change method described in Section 6.2. However,
it should be taken into account that typically for descents
more free space is required than for ascents because of a
higher control error of the pirouette descent flight mode.
Therefore, it is often necessary to descend at the approach
point. If there is not sufficient space either, a descent must
be conducted at a third point (descent point). The helicopter
can then be flown to the approach point using strategy 1 or
2 as described in Sections 6.3 and 6.4. Once the helicopter
hovers at the inspection point, it starts capturing images. At
the same time, it yaws to the left and right to increase the
field of view of the inspection camera.
7. EXPERIMENTAL RESULTS
In this section, we present experimental results of the LAOA
system implemented on one of CSIRO’s unmanned heli-
copters (see Figure 1 in the introduction). Technical spec-
ifications of the base system components can be found in
Table V of the appendix. All experiments were conducted
in unstructured outdoor environments without the use of
maps.
Flights were conducted at two sites: an industrial site
in a natural bushland setting in Brisbane and a farm in rural
Queensland, Australia. The first site is the QCAT flight test
area. It is about 180 m × 130 m in size and contains several
natural and man-made obstacles such as trees of various
sizes, bushes, a microwave tower, fences, two sheds, two
vehicles, and other small obstacles. The second site is the
Burrandowan flight-test area. It is a typical rural environ-
ment of more than 200 ha and includes areas with rough
terrain and varying slope. All flights were conducted in
compliance with the Australian regulations for unmanned
aircraft systems.
We tested our LIDAR system in both environments
under different ambient light conditions. The two critical
parameters of the LIDAR system, safeLidarHeight and safeOb-
stacleScanRange, are provided in Table VI of the Appendix.
All obstacle avoidance flights were conducted 10 m
above ground with terrain following enabled. In most ex-
periments we flew the helicopter manually to a descent
point. After switching to autonomous flight, it was com-
manded to descend to terrain-following height using the
height-change method described in Section 6.2 and then
to conduct the experiment as specified in a state machine.
In more complex missions, such as the one described in
Section 7.6, the low-altitude flight was part of a flight plan
with several predefined waypoints that was executed by a
waypoint sequencer.
The two most important experimental results concern the
system's performance with regard to safe and reliable au-
tonomy. Safe autonomy is the ability of the LAOA system
to perform a low-altitude flight without human interaction
within its specified limits without causing damage to the
environment or the helicopter. Safe autonomy does not im-
ply that all specified goal points are reached. The system
may abort a low-altitude flight for safety reasons. Reliable
autonomy is the ability of the system to reach specified goal
points.
Table I shows that despite performing a significant
number of runs in different scenarios, we did not encounter
a failure. This demonstrates the safety of the system. The
table includes flights conducted during and after devel-
opment of the system. The table does not contain flights
Table I. Safe autonomy of the LAOA system.

Method                Runs  Scenarios  Flight time (h)  Failuresa
Terrain following     73b   11         11.4             0
Avoidance strategy 1  27c    7          2.8             0
Avoidance strategy 2  23c    6          7.1             0

aA failure is when damage occurs during a run or a backup pilot
has to take over to prevent damage.
bThe helicopter followed more than 14 km of terrain at 10 m height
above ground.
cThe helicopter encounters at least one obstacle during waggle
cruise flight with terrain following from a start point to a goal
point.
Table II. Reliable autonomy of the LAOA system.

Method                Missions  Scenarios  Successesa
Avoidance strategy 1  20        7          20
Avoidance strategy 2  17        6          17

aA mission is successful if the helicopter autonomously reaches all
specified low-altitude goal points.
where assumptions of the task specification were violated.
The task specification of our system includes the following
critical assumptions (see Tables V and VI of the Appendix):
the helicopter navigation system operates according to its
specification, and the horizontal and vertical wind speed is
within the specified limits. Failures that occurred because
of a violation of a critical assumption are described further
below.
Table II contains results of the deployment of the sys-
tem in several missions8
using the two obstacle-avoidance
strategies described in Section 6. All missions were suc-
cessfully executed. It should be mentioned that we con-
ducted many more experiments during the development of
the avoidance strategies and that there were cases in which a
specified goal point was not reached. These cases were thor-
oughly analyzed and the system was modified accordingly.
The missions included in Table II, however, were executed
after the development was completed.
During the development of the LAOA system, we had
three cases in which a backup pilot had to take over control.
In the first case, the GPS of the navigation system failed
possibly due to radio interference with a microwave tower.
In the second case, the control system could not cope with a
strong wind gust while the helicopter was close to a tree. In
the last case, the helicopter was pushed toward the ground
8The difference between a run and a mission is that a run is a seg-
ment of a flight related to a specific experiment whereas a mission
is a flight to accomplish a specified task. A mission may include
several runs.
Table III. Empirical control errorsa of flight modes during
low-altitude flight.b

Mode               h95 (m)  p95 (m)  ψ95 (deg)  v95/ME (m/s)
hover              0.8      1.8      9
waggle cruise      0.8      1.6c                0.7/0.1
pirouette descent           4.5                 0.4/0.0
climb                       2.4                 0.3/0.0
yaw                         1.0      2.9

ax95 = 95th percentile of absolute error, xME = mean error,
h = height error, p = horizontal position error, ψ = yaw error,
v = horizontal/vertical velocity error.
b10 m height above ground; 4–7 m/s wind speed at ground station
location.
ccross-track error.
by a strong downdraft. In all three cases, a critical design
assumption was violated. The risk of failure during a BVR
flight was decreased by increasing the safety distance to
obstacles and by avoiding flights in potentially bad weather
and areas with GPS problems.
7.1. Control Errors
The parameters of the terrain following and avoidance func-
tions listed in Table VI of the Appendix depend, among
others, on the control errors. Table III shows control errors
for the different flight modes used in the LAOA system.
The errors were estimated empirically from flight data of
the CSIRO helicopter. The errors depend strongly on the un-
derlying external control system. When estimating control
errors empirically, it is important to collect flight data in
representative environmental conditions.
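The percentile and mean-error figures of Table III can be computed from logged control errors as follows (a sketch using the nearest-rank method; the authors do not specify their percentile estimator):

```python
import math

def percentile_95_abs(errors):
    """Empirical 95th percentile of the absolute error, as used for
    the h95, p95, and psi95 figures in Table III (sketch, nearest-rank
    method; `errors` is a list of signed control errors from flight
    logs)."""
    s = sorted(abs(e) for e in errors)
    rank = max(0, math.ceil(0.95 * len(s)) - 1)
    return s[rank]

def mean_error(errors):
    """Mean (signed) error, the ME figures in Table III."""
    return sum(errors) / len(errors)
```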
Figure 16 shows the tracking performance for the yaw
angle during pirouette descent and waggle cruise flight.
Accurate yaw angle tracking is not important for obsta-
cle detection as long as the specified scan area is covered.
We conducted some longer pirouette descents to see if we
would encounter problems because of the sustained rota-
tion of the helicopter. We tested pirouette descents with
a height change of more than 30 m without noticing any
problems.
7.2. Terrain Following
Terrain following is a key functionality of the LAOA system.
Hence, we conducted many experiments to investigate whether
the simple method we propose is adequate. The flights were
conducted with and without waggle motion. We did not
notice a deterioration in performance with waggle motion.
Terrain following was flight-tested for cruise speeds up to
2 m/s and down to 5 m height above ground in hilly terrain.
However, due to the frontal sensing range limitation of our
LIDAR system, we limited the cruise speed for obstacle-
avoidance flights to 1 m/s. The height error in Table III for
the waggle cruise mode with terrain following is estimated
from flights with 1 m/s cruise speed.
Figure 17 shows the path of two terrain-following
flights conducted at the QCAT site: a flight around the com-
pound (descent point to end point) without waggle motion
with 2 m/s cruise speed at 5 m height above ground and a
flight across the compound (point 1 to point 2) with waggle
motion at 1 m/s cruise speed at 10 m height above ground.
In the first flight, the helicopter was commanded to fly
to a series of waypoints, defining an obstacle-free flight path
of about 450 m around the compound. The corresponding
altitude plot is shown in Figure 18. The plot contains three
different heights: The height above takeoff point (−pD) is
estimated based on barometric pressure. The height above
ground is the hgnd value (it does not show terrain disconti-
nuities as it is the intercept of the line fit described in Sec-
tion 4.2). The terrain height is estimated by subtracting the
height above ground from the height above takeoff point.
The helicopter was manually flown to the descent point at
a height of approximately 22 m above takeoff point. At that
height, no LIDAR-based height estimates were available.
After switching to autonomous flight, the height-change
state machine (Figure 8) was used to perform a descent to the
terrain-following height. The height control was changed
from pressure-based height to LIDAR-based height and the
terrain following was enabled when the lowAltitudeFlightOn
event was sent.
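The relation between the three heights can be stated directly (a one-line sketch; pD is the NED down position):

```python
def terrain_height(p_d, h_gnd):
    """Terrain height estimate used in the altitude plot (sketch):
    the height above the takeoff point is -pD (barometric, NED down
    position), and subtracting the LIDAR-derived height above ground
    h_gnd gives the terrain height relative to the takeoff point."""
    height_above_takeoff = -p_d
    return height_above_takeoff - h_gnd
```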
For the waypoint flight, we used a waypoint sequencer
that utilized the simple cruise mode (see Section 5.7). Dur-
ing 370 s of terrain following, the helicopter maintained the
specified clearance from the ground with approximately
1 m error in height regulation. Figure 19 shows the slope
Figure 16. Yaw angle tracking during pirouette descent (left) and waggle cruise flight (right).
Figure 17. Two terrain-following flights at the QCAT site:9 flight
around compound (descent point to end point) and flight across
compound (point 1 to point 2). At point A, the LAOA system detected
a terrain discontinuity.
Figure 18. Altitude plot of the terrain-following flight around
the compound.
Figure 19. Estimated terrain slope angle during the terrain-
following flight around the compound.
angle computed by the terrain-detection system. Slope in-
formation has not been used in the current implementation
but might be useful for flights at higher speed.
The performance of the height-offset method described
in Section 5.8 is demonstrated with the terrain-following
flight across the compound. Figure 17 only shows the part
9Aerial imagery of the QCAT site copyright by NearMap Pty Ltd.
and used with permission.
Figure 20. Altitude plot of the terrain-following flight across
the compound.
of the flight related to the crossing of the compound. The he-
licopter hovered at low altitude at the start of the run (point
1) and the end of the run (point 2). During the flight, many
terrain discontinuities, such as the fence and the roof, were
encountered. The altitude plot in Figure 20 shows when the
height offset was added to keep a safe distance to terrain
obstacles. The first discontinuity was detected at time A in
the altitude plot or point A in the map just before the fence
of the compound. The offset was removed at time B in the
altitude plot, which corresponds to point 2 in the map. Point
2 was above the fence on the other side of the compound.
We let the helicopter hover for an extended time at point 2
to demonstrate the behavior of the helicopter descending to
the terrain-following height, detecting the terrain disconti-
nuity (fence), and again applying the height offset at time
C. During the whole flight, the helicopter kept a safe dis-
tance to terrain obstacles and the system did not abort the
low-altitude flight.
In a few test cases, the LAOA system aborted a low-altitude
flight and climbed to a safe altitude
because the closeObstacle event was sent (see Figure 7). It
mostly occurred while flying over complex terrain obsta-
cles such as low roofs, fences, and antenna poles inside the
compound at the QCAT site. However, in most cases the
height offset method prevented the helicopter from getting
too close to terrain obstacles and from aborting the low-
altitude flight.
7.3. Wall Following
The wall-following behavior is essential for avoidance strat-
egy 2. To demonstrate the performance of the wall-following
method described in Figure 13, we present a flight that was
conducted during the development of avoidance strategy
2. The flight path is shown in Figure 21. The helicopter was
commanded to fly to waypoints 1 and 2 at the QCAT site.
While trying to reach the waypoints, the helicopter exhib-
ited a long wall-following behavior along the boundary of
a forest and groups of trees without sufficient clearance to
fly in between. Again the height-change state machine was
used to descend to terrain-following height.
Figure 21. Wall following flight at the QCAT site.
After detecting the first obstacle, the helicopter stayed in the
wall-following state 14 of the state machine of strategy 2
(Figure 11) for most
of the flight.
Apart from the flight path, Figure 21 also shows
the obstacle points that correspond to the distances df of
the closest frontal obstacles (Section 4.1). As can be seen
in the figure, the helicopter keeps a safe distance from ob-
stacles during wall following. With the parameters used in
our experiments (see Table VI of the Appendix), the flight
time per generated avoidance waypoint is approximately
1 min and the distance between two avoidance waypoints
is approximately 12 m.
7.4. Avoidance Strategy 1
Avoidance strategy 1 is quick to implement and useful
for testing obstacle detection and flight modes. However,
strategy 2 outperforms strategy 1 in finding a path in com-
plex scenarios. Hence, we will only present results of two
obstacle-avoidance flights using strategy 1. The flights are
interesting as they were part of two complex infrastructure
inspection missions that were executed beyond visual range
without a backup pilot at the Burrandowan site. The com-
plete missions are described in Section 7.6; here we focus on
the obstacle-avoidance part. The paths of the two obstacle-
avoidance flights are shown in the top left corners of
Figures 23(a) and 23(b).
In both missions, the helicopter performed a pirouette
descent at the descent point (start point) and was com-
manded to fly to an approach point (goal point). Between
the two points was a big tree. In the first mission, the pre-
defined avoidance direction was to the right, in the second
mission it was to the left. During the first low-altitude flight,
the LAOA system generated five waypoints to avoid the big
tree. During the second low-altitude flight, more waypoints
were generated as the initial avoidance direction was to the
left and there was no safe path to circumnavigate the tree
clockwise. The helicopter returned to the descent point after
several attempts, changed the avoidance direction, and
succeeded in circumnavigating the tree anticlockwise, as it
had in the first flight.
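This return-and-retry behavior can be summarized as a short sketch (an illustrative simplification, not the authors' implementation; try_reach is a hypothetical stand-in for one avoidance attempt, and the attempt limit corresponds to the maxAttempts parameter in Table VI of the Appendix):

```python
def run_strategy1(try_reach, max_attempts, initial_direction):
    """Try to reach the goal with a fixed avoidance direction; after
    max_attempts failed attempts, return to the descent point and flip
    the direction (+1 = right/clockwise, -1 = left/anticlockwise).

    try_reach(direction) performs one avoidance attempt and returns
    True if the goal was reached."""
    direction = initial_direction
    for _ in range(2):  # each avoidance direction is tried at most once
        for _ in range(max_attempts):
            if try_reach(direction):
                return True
        direction = -direction  # back at the descent point: flip direction
    return False  # goal not reached with either direction
```

In the second mission described above, the initial direction was to the left, all attempts in that direction failed, and the flipped (clockwise) direction then succeeded.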
The implementation of strategy 1 used for the two
flights was slightly different from the final version: the he-
licopter did not stop at freeSpaceWp1 points and there was
an implementation fault in the processing of short LIDAR
range measurements when the helicopter was in obstacle-
free space. The latter caused the generation of one unneces-
sary avoidance waypoint during the first flight before pro-
ceeding to the approach point [Figure 23(a)]. The problem
has since been fixed. The switching to the hover mode at
freeSpaceWp1 points was introduced after simplifying a state
machine in the current implementation.
7.5. Avoidance Strategy 2
We put significant effort into the development and testing of
avoidance strategy 2. Since completing the development, we
have not encountered a situation in which the LAOA system
has not reached a goal point. In the following, we present six
missions executed after completing the development. The
flight paths are shown in Figure 22.
A typical avoidance flight with a single obstacle in the
flight path is shown in Figure 22(a). When detecting the
tree between the start and the goal point, the LAOA system
decided to fly to the left using the method described in
Figure 12 in Section 6. From the helicopter’s perspective, the
gap between the compound on the right and the tree was not
big enough and it detected more free space on the left. The
system circumnavigated the tree clockwise by generating
five avoidance waypoints before the wall following was
aborted by the closeToGoal event.
Figure 22(b) shows a simple inspection mission. The
task was to take aerial photos of a ground object with a digi-
tal camera mounted underneath the helicopter. The ground
object was defined by its position in the earth-fixed NED
frame (inspection point). The mission was started at high al-
titude and included a pirouette descent at the descent point.
The helicopter then had to fly from the descent point (start
point) to the inspection point (first goal point) and return
to the descent point (second goal point). As in the previous
scenario, the helicopter circumnavigated the tree clockwise,
however on the return flight it found a path between the
sheds in the compound and the tree. The wall following
on the return flight was aborted by the progress event. The
height variation of the obstacles inside the compound was
large.
Figure 22. Six LAOA missions flown within visual range at the QCAT site using strategy 2: (a) tree avoidance, (b) inspection of
a ground object, (c) inspection of microwave tower, (d) inspection of microwave tower with different start point, (e) inspection of
several ground objects, (f) inspection with unreachable inspection point.
It ranged from the height of a fence to the height of a
microwave tower. Depending on where the helicopter
approached the compound from, the free space was perceived
differently, and obstacles were sometimes perceived as
terrain (terrain obstacles). The return flight also demonstrates
the interaction of the horizontal obstacle avoidance and the
terrain-following behavior. When approaching the narrow
gap between the tree and the compound, the helicopter
climbed when detecting the fence and then had enough free
space to fly to the left to avoid the tree.
Figure 22(c) shows another inspection mission. This
time the task was to take frontal photos of a microwave
tower [Figure 26(b)] at close range using the approach
method described in Section 6.6. For this task, a digital
camera was mounted at the front of the helicopter (see
Figure 1). The start point was again the descent point and the
goal point was the approach point. The location of the mi-
crowave tower was defined by the target point. The point
where the helicopter stopped to take photos was the in-
spection point. In this mission, the helicopter successfully
avoided several trees and parts of the compound.
We conducted the tower inspection several times with
different descent points. A second example is shown in Fig-
ure 22(d). In this mission, we started west of the previous
descent point. As there was not enough space for a pirouette
descent, but we knew there was enough space for a simple
descent (no rotation), we used the simple descent to enable
terrain following. This time the helicopter avoided the
tree and the compound to the right to get to the approach
point.
Figure 22(e) shows the flight path of the most com-
plex low-altitude flight we have conducted. The task was
to take aerial photos of five ground objects. The mission
started again with a pirouette descent. On its way to the
first inspection point, the helicopter had to avoid two trees.
The second inspection point could be reached directly. Al-
though the helicopter stopped on the way to the third in-
spection point as it detected a tree in the west, it then
found enough space to continue the flight to the inspec-
tion point directly. This demonstrates how uncertainties
can change a behavior. The fourth inspection point could
be reached by flying north of the tree that was on the di-
rect path. To reach the last inspection point, the helicopter
deviated far from the direct path as it did not find a path
east of the compound. It flew between the tree north of
the compound and the sheds in the compound and fol-
lowed the boundary of the tree clockwise. The clockwise
wall-following behavior around the tree was aborted by
the outside event and the wall-following direction changed.
The helicopter followed the tree back to the compound
and found a path to the fifth inspection point west of the
compound.
The mission depicted in Figure 22(f) demonstrates what
happens if the LAOA system is commanded to fly to a goal
point that is unreachable. In this mission, the third inspec-
tion point was behind two trees and the gap between the
two was too narrow for the helicopter to safely fly through.
The helicopter successfully reached the first two inspection
points; however, it exhibited oscillatory behavior in the
region of the third inspection point. It tried to fly between
the trees to reach the point, but it always detected
a tree inside the scan corridor and continued with a wall-
following behavior along one of the trees. It changed the
wall-following direction when either an outside or noWall-
FollowingProgress event was sent. We aborted the flight after
a couple of iterations. In a mission without a backup pilot,
the lowFuel event would have aborted the low-altitude flight
(Figure 7).
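The direction switching observed in this mission can be illustrated with a toy event loop (a sketch only; in the real system these are condition-based events of the strategy 2 state machine, see Table VII of the Appendix):

```python
def wall_follow_loop(events):
    """Toy event handler: flip the wall-following direction on 'outside'
    or 'noWallFollowingProgress', stop on 'closeToGoal', and abort the
    low-altitude flight on 'lowFuel'. events is an iterable of event
    names standing in for the condition-based events of the LAOA system."""
    direction = 1  # +1 clockwise, -1 anticlockwise
    for name in events:
        if name == 'lowFuel':
            return ('aborted', direction)
        if name in ('outside', 'noWallFollowingProgress'):
            direction = -direction  # change wall-following direction
        elif name == 'closeToGoal':
            return ('reached', direction)
    return ('running', direction)
```

With an unreachable goal, 'outside' and 'noWallFollowingProgress' alternate indefinitely until 'lowFuel' (or the operator) ends the flight.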
During the development of the avoidance strategies,
we noticed that reactive behaviors exhibited in real-world
scenarios were often different from what we expected. Reac-
tive methods that were designed based on idealized models
of the world and tested in simulation often did not perform
satisfactorily in our real-world experiments and had to be
modified accordingly.
7.6. Missions Beyond Visual Range
The LAOA system was demonstrated during the final flight
trials of the Smart Skies Project, which took place at the
Burrandowan site. The Smart Skies Project was a research
program exploring the development of future technologies
that support the efficient utilization of airspace by both
manned and unmanned aircraft (Clothier et al., 2011). In the
framework of the project, the CSIRO
UAS team was engaged in two main areas of research: de-
pendable autonomous flight of unmanned helicopters in
controlled airspace and dependable autonomous flight of
unmanned helicopters at low altitude over unknown ter-
rain with static obstacles. A high level of dependability was
required to permit flights beyond visual range without a
backup pilot. The new technology was envisaged to en-
able autonomous infrastructure inspection missions with
unmanned helicopters.
The objective of the final flight trials was to demonstrate
the integration of all components that had been
developed in the project. The flight trials involved
several aircraft: (1) the CSIRO helicopter described in this
paper, (2) an autonomous unmanned fixed-wing aircraft,
(3) a manned light aircraft equipped with an autopilot for
semiautonomous flights, and (4) a number of simulated
manned and unmanned aircraft. The task for the CSIRO
helicopter was to take frontal photos of a windmill on the
Burrandowan farm for inspection. The flights to and from
the inspection area were conducted in airspace shared with
the other aircraft. The airspace was controlled by a cen-
tralized automated airspace control system (ADAC) devel-
oped by our project partner Boeing Research and Technology
USA.
Figure 23. The two windmill inspection missions flown beyond visual range at the Burrandowan site (satellite imagery of the
Burrandowan site copyright by Google Inc.) using strategy 1: (a) initial avoidance direction to the right, (b) initial avoidance
direction to the left.
Figure 24. Altitude plot of the windmill inspection flight shown in Figure 23(b).
Figure 25. The CSIRO helicopter approaching the windmill at the Burrandowan site. The helicopter successfully avoids the big
tree in the left image, which is in between the descent point and the windmill (see also Figure 23).
We executed the inspection mission twice, as shown
in Figures 23(a) and 23(b). The windmill (target point) was
located about 1.4 km from the takeoff point. The inspection
missions started at the mission start point and finished
at the mission end point. The missions were executed be-
yond visual range of the operator at the ground station
location and without a backup pilot. The operator could
terminate a flight but had no means of flying the aircraft
from the ground station. The flights to and from the
inspection area [zoomed-in area in Figures 23(a) and 23(b)]
were conducted at high altitude without static obstacles fol-
lowing a predefined flight plan. Flights at high altitude were
Figure 26. Inspection photos taken by the helicopter from approximately 10 m: (a) windmill at the Burrandowan site; (b) microwave
tower at the QCAT site.
conducted with 5 m/s ground speed and without wag-
gle motion. Other aircraft were avoided by following in-
structions the helicopter received during the flight from the
airspace control system. Once the helicopter descended into
the inspection area at the descent point, flights were con-
ducted at low altitude without other air traffic.
In the first mission, the helicopter flew to the approach
point at low altitude as described in Section 7.4 and ap-
proached the windmill using the method explained in Sec-
tion 6.6. At the inspection point, it took photos of the wind-
mill and climbed to a safe altitude. During the return flight
in controlled airspace, the helicopter was instructed by the
automated airspace control system to deviate from the orig-
inal flight plan to the west to avoid an aircraft (ADAC
flight plan). In the second mission,10
no flight plan changes
were required during the flights to and from the inspection
area. The low-altitude flight, however, was more challeng-
ing than in the first mission as the predefined avoidance
direction was to the left. This made the helicopter initially
fly into an area without a safe path to the windmill, as de-
scribed in Section 7.4.
The altitude plot of the second flight is shown in
Figure 24. The different heights were already explained
in Section 7.2. The plot depicts the three stages of the
inspection mission: the flight to the inspection area, the
low-altitude flight, and the return flight. The heights in
stages 1 and 3 were predefined in the flight plan relative
to the takeoff point. The two sudden height changes during
the low-altitude flight were caused by terrain discontinu-
ities (Section 5.8).
Figure 25 contains photos of the helicopter taken by
an observer while approaching the windmill. The observer
was in radio contact with the operator at the ground station.
To meet regulatory requirements, the helicopter was also
equipped with a flight-termination system that prevented
the aircraft from leaving a specified mission area in case of
a fatal system failure. Flight termination could have been
initiated by a system monitor or the operator. The design
of the flight-termination system is beyond the scope of this
paper.
10A video showing the arrival of the helicopter in the inspection
area, the pirouette descent, waggle cruise flight, the approach of the
windmill, and the departure is available at
http://www.cat.csiro.au/ict/download/SmartSkies/bvr_inspection_flight_short.mov
One of the inspection photos of the windmill taken
by the helicopter is shown in Figure 26(a). Figure 26(b)
contains an inspection photo taken during the microwave
tower inspection flights described in Section 7.5. Both pho-
tos were taken at an approximate distance of 10 m to the
inspection object.
8. CONCLUSION
We presented a dependable autonomous system and the un-
derlying methods enabling goal-oriented flights of robotic
helicopters at low altitude over unknown terrain with static
obstacles. The proposed LAOA system includes a novel
LIDAR-based terrain- and obstacle-detection method and a
novel reactive behavior-based avoidance strategy. We pro-
vided a detailed description of the system facilitating an
implementation for a small unmanned helicopter. All meth-
ods were extensively flight-tested in representative outdoor
environments. The focus of our work has been on de-
pendability, computational efficiency, and easy implemen-
tation using off-the-shelf hardware components. We chose
a component-based design approach including extended-
state machines for modeling robotic behavior. The state
machine-based design simplified in particular the develop-
ment and analysis of different avoidance strategies. More-
over, the state machine-based design makes it easy to extend
the proposed strategies.
We have shown that obstacles can be reliably de-
tected by analyzing readings of a 2D LIDAR system while
flying pirouettes during vertical descents and waggling
the helicopter in yaw during forward flight. We have also
shown that it is feasible to use a reactive behavior-based ap-
proach for goal-oriented obstacle avoidance. We decoupled
the terrain-avoidance problem from the frontal obstacle-
avoidance problem by combining a terrain-following ap-
proach with a reactive navigation avoidance strategy for
a 2D environment. We put significant effort into the de-
velopment of an avoidance strategy that considers real-
world constraints and that is optimized for robotic heli-
copters operating in rural areas. The LAOA system can
also safely reach locations in more confined spaces as it
is not much limited by dynamic constraints and is ca-
pable of detecting obstacles in front of and below the
helicopter.
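The decoupling of terrain avoidance from frontal obstacle avoidance can be made concrete with a minimal sketch (illustrative only; k_h is an assumed proportional gain, not a parameter from this paper): the vertical channel regulates height above ground toward the terrain-following reference, while the horizontal channel carries the velocity produced by the 2D avoidance strategy.

```python
def command(terrain_height_ref, h_gnd, avoidance_velocity_ne, k_h=0.5):
    """Decoupled velocity command: the vertical rate regulates height
    above ground toward the terrain-following reference, independently
    of the horizontal velocity produced by the 2D avoidance strategy.
    Returns (v_north, v_east, v_down) in the NED convention (down positive).
    k_h is an assumed gain for illustration."""
    v_down = -k_h * (terrain_height_ref - h_gnd)  # climb (negative down) when too low
    v_north, v_east = avoidance_velocity_ne
    return (v_north, v_east, v_down)
```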
The system and methods were thoroughly evaluated
during many low-altitude flights in unstructured, outdoor
environments without the use of maps. The helicopter per-
formed many close-range inspection flights. Among others,
it flew multiple times to a microwave tower at the QCAT site
and a windmill at the Burrandowan site. Two windmill in-
spection flights were conducted beyond visual range with-
out a backup pilot. In total, the helicopter followed more
than 14 km of terrain at 10 m above ground, avoided 50 ob-
stacles with no failure, and succeeded in reaching 37 reach-
able locations. The total flight time was more than 11 h. The
experimental results demonstrate the safety and reliabil-
ity of the proposed system allowing low-risk autonomous
flight.
However, our approach has some limitations. It is less
suitable for environments with complex obstacle config-
urations that are typically found in urban areas. Here, a
mapping-based approach is likely to be more efficient. In
environments for which the system was designed, however,
mapping is often not beneficial as locations are rarely revis-
ited and the sensing range of LIDAR systems is typically
short compared to the mission area. Another limitation of
our approach is that obstacles are predominantly avoided
from the side as height changes are only performed through
terrain following during low-altitude flight. Locations that
are surrounded by obstacles that cannot be avoided through
height change are unreachable. Furthermore, steep terrain is
recognized as a frontal obstacle and the system tries to avoid
it from the side. This may result in inefficient behavior. Fi-
nally, we have not fully investigated what kind of obstacles
our system cannot detect. It mainly depends on the per-
formance of the 2D LIDAR system and the chosen system
parameters. What we can say is that the implemented sys-
tem detected all obstacles in the experiments we conducted
in relevant environments. Most obstacles were larger ob-
jects such as trees and sheds, but the system also detected
smaller objects such as fences and antenna poles. Our sys-
tem has not been specifically configured for the detection of
small or thin obstacles such as power lines.
Future work could address these limitations. In addi-
tion to improving the reactive system by including the not
yet flight-tested offset angle method and additional behav-
iors for height changes, adding a mapping component to
the system would significantly improve its efficiency. Hav-
ing a LIDAR system with longer range and a more accurate
navigation and control system would increase cruise speed
and enable flights closer to obstacles. Furthermore, a
suitable11 3D LIDAR system would make the pirouette descent
and waggle cruise flight modes obsolete. The LAOA sys-
tem has not yet been validated on helicopters of a different
class. A procedure for determining system parameters from
characteristics of the aircraft, the sensor and control system,
and the environment would facilitate the adaptation of the
system.
ACKNOWLEDGMENTS
This research was part of the Smart Skies Project and was
supported, in part, by the Queensland State Government
Smart State Funding Scheme. The authors gratefully ac-
knowledge the contribution of the Smart Skies partners
Boeing Research & Technology (BR&T), Boeing Research
& Technology Australia (BR&TA), Queensland University
of Technology (QUT), and all CSIRO UAS team members. In
particular, we would like to thank Bilal Arain and Lennon
Cork for the work on the control system of the CSIRO
unmanned helicopter and Brett Wood for the technical
support.
APPENDIX
Tables IV–IX present additional information.
11According to our requirements, a suitable 3 D LIDAR system is
dependable, lightweight, low power, easy to install, cost effective,
and has a comparable resolution and field of view (when inte-
grated).
Table IV. Nomenclature.

Variable name        Symbol     Description(a)

–                    dd         vertical distance to obstacle below helicopter
–                    df         horizontal distance to obstacle in front of helicopter
–                    do         distance from helicopter to line through start point and goal point
–                    dp         distance from helicopter to line through progressWp point and goal point
–                    h          height observation for control
heightRef            hf         fixed height reference
–                    hgnd       helicopter height above ground
–                    hmin       helicopter height above highest point below helicopter
–                    θ, φ       helicopter pitch and roll angles
–                    pD         helicopter height in the earth-fixed NED frame
goalPosition         pgoalNE    goal point
positionRef          pfNE       fixed position reference
wp                   pfwpNE     fixed target waypoint for waggle cruise mode
–                    pNE        helicopter position
–                    (ri, λi)   LIDAR readings in polar coordinates
–                    vNED       helicopter velocity vector
–                    vcxyz      velocity references for external controllers
–                    ψc         yaw angle reference for external controller
–                    ψf         fixed yaw angle reference
groundTrackAngleRef  ψfg        fixed ground track angle reference
helicopterYawAngle   ψ          true heading of the helicopter

aTrue north is used for the earth-fixed NED frame. All points pNE = (pN, pE) are described by their horizontal position in the earth-fixed
NED frame. For simplicity reasons, we do not distinguish between points and their coordinate vectors.
Table V. Technical specifications of base system components.

Helicopter   Vario Benzin Trainer (modified)
             12.3 kg maximum takeoff weight
             1.78 m rotor diameter
             23 cm3 two-stroke gasoline engine
             60 min endurance
Avionics     L1 C/A GPS receiver (2.5 m CEP), MEMS-based AHRS, high-resolution barometric pressure sensor
             Hokuyo UTM-30LX 2D LIDAR system (270° field of view, 30 m detection range, 25 ms scan time)
             Vortex86DX 800 MHz navigation computer producing helicopter state estimates at 100 Hz
             Via Mark 800 MHz flight computer running the external velocity and yaw control at 100 Hz
Journal of Field Robotics DOI 10.1002/rob
30 • Journal of Field Robotics—2013
Table VI. Parameters of the LAOA system used in the flight tests.

                                     Strategy 1   Strategy 2
avoidanceAngle                       90°          60°
avoidanceWpDistance                  12 m         12 m
clearanceAngle                       –            60°
closeObstacleDistance                5 m          5 m
closeToGoalDistance                  –            20 m
corridorLength                       –            12 m
corridorWidth                        –            20 m
cruiseHeight                         10 m         10 m
detectionWindowAngle                 90°          90°
detectionWindowSize                  10 m         10 m
discontinuityHeight                  8 m          8 m
farObstacleDistance                  15 m         15 m
freeSpaceDistance                    –            20 m
freeSpaceWpDistance                  –            12 m
heightClearance                      (7 m)        (7 m)
maxAttempts                          4            10
maxHorizontalWind                    10 m/s       10 m/s
maxPathDistance                      –            50 m
maxVerticalWind                      2 m/s        2 m/s
maxWallAngle                         –            120°
maxWaggleYawAngle                    45°          45°
minimalLidarRange                    1.5 m        1.5 m
minPathProgressDistance              –            20 m
minWallFollowingProgressDistance     –            10 m
offsetAngle                          –            (20°)
outsideHysteresis                    –            20 m
pirouetteYawRate                     45°/s        45°/s
progressPathTolerance                –            5 m
safeAltitude                         55 m         55 m
safeLidarHeight                      15 m         15 m
safeObstacleScanRange                14 m         14 m
scanAngle                            –            120°
terrainPointVariation                1 m          1 m
verticalOffset                       3 m          3 m
verticalOffsetDecay                  16 s         16 s
verticalSpeed                        1 m/s        1 m/s
waggleCruiseSpeed                    1 m/s        1 m/s
wagglePeriodTime                     4 s          4 s
wallCatchDistance                    –            15 m
wallReleaseDistance                  –            15 m
Table VII. Condition-based events(a) used in state diagrams.

Event (↔ opposite event)                            Condition

clearOfWall                                         wrap(ψ − bearingAngle) ≥ avDirection · clearanceAngle, where wrap(α) = atan2(sin α, cos α)
climb (↔ descend)                                   hgnd ≤ heightRef
closeObstacle                                       [(df < closeObstacleDistance) ∧ (df ≠ 0)] ∨ [(hmin < closeObstacleDistance) ∧ (hmin ≠ 0)]
closeToGoal                                         (|pgoalNE − pNE| < closeToGoalDistance) ∧ (closeToGoalFlag = 0)
corridorObstacle                                    waggle cruise mode enabled ∧ (df ≠ 0) ∧ df cos(ψ − ψg) < corridorLength ∧ |df sin(ψ − ψg)| < ½ corridorWidth
directionKnown (↔ directionUnknown)                 avDirection ≠ 0
farObstacle                                         waggle cruise mode enabled ∧ (df < farObstacleDistance) ∧ (df ≠ 0)
firstAttempt (↔ notFirstAttempt)                    attempts = 1
maxAttempts (↔ moreAttempts)                        attempts > maxAttempts
noLidarHeight                                       (hgnd = 0) ∧ terrain following switched on
outside                                             [(do < maxPathDistance − outsideHysteresis since last outside event) ∨ (no previous outside event)] ∧ (do > maxPathDistance)
outsideFlightArea                                   helicopter is between flight area and no fly zone
safeLidarHeight (↔ noSafeLidarHeight)               (hgnd < safeLidarHeight) ∧ (hgnd ≠ 0)
pirouetteObstacle                                   ([(df < farObstacleDistance) ∧ (df ≠ 0)] ∨ [(dd < heightClearance(b)) ∧ (dd ≠ 0)]) ∧ pirouette descent mode or yaw mode enabled when in height change state
progress                                            (initProgressFlag = 0) ∧ (dp ≤ progressPathTolerance) ∧ [|pgoalNE − pNE| < (|pgoalNE − last pNE with initProgressFlag = 1| − minPathProgressDistance)]
wallCatch                                           (df < wallCatchDistance) ∧ (df ≠ 0)
wallFollowingProgress (↔ noWallFollowingProgress)   |pgoalNE − pNE| < (|pgoalNE − wallFollowingStart| − minWallFollowingProgressDistance)
wallRelease                                         (df ≥ wallReleaseDistance) ∨ (df = 0)

aA condition-based event is sent when a condition starts to hold or when a condition holds at the time a corresponding query event is
received.
bThe heightClearance condition was introduced after we finished our experiments as we realized there are some rare cases in which the
safety distance to obstacles and terrain during a height change could be less than specified.
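Two of the conditions above translate into particularly compact code: the closeToGoal distance test and the outside event with its outsideHysteresis re-arming. The following Python fragment is an illustrative reading of these conditions, not the flight code:

```python
import math

def close_to_goal(p_ne, p_goal_ne, close_to_goal_distance):
    """closeToGoal: true once the horizontal distance to the goal
    drops below the threshold (points are (north, east) pairs)."""
    d = math.hypot(p_goal_ne[0] - p_ne[0], p_goal_ne[1] - p_ne[1])
    return d < close_to_goal_distance

class OutsideDetector:
    """outside event with hysteresis: fires when d_o exceeds
    maxPathDistance, and re-arms only after d_o has dropped below
    maxPathDistance - outsideHysteresis (or if no event was sent yet)."""
    def __init__(self, max_path_distance, outside_hysteresis):
        self.max_path_distance = max_path_distance
        self.outside_hysteresis = outside_hysteresis
        self.armed = True  # no previous outside event yet

    def update(self, d_o):
        if self.armed and d_o > self.max_path_distance:
            self.armed = False
            return True  # send outside event
        if d_o < self.max_path_distance - self.outside_hysteresis:
            self.armed = True  # re-armed after returning close to the path
        return False
```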
Table VIII. Calculation of variables used in state diagrams.

avDirection = 1 if wrap(ψgoal − leftObstacleAngle) > wrap(rightObstacleAngle − ψgoal), −1 otherwise,
    where wrap(α) = atan2(sin α, cos α)

avoidanceWp = (pfE + avoidanceWpDistance · sin ψa, pfN + avoidanceWpDistance · cos ψa),
    where ψa = bearingAngle + avDirection · avoidanceAngle

freeSpaceWp1 = (pfE + freeSpaceWpDistance · sin ψgoal, pfN + freeSpaceWpDistance · cos ψgoal)

freeSpaceWp2 = (pfE + freeSpaceWpDistance · sin ψ, pfN + freeSpaceWpDistance · cos ψ)

leftScanAngle = ψgoal − ½ scanAngle

rightScanAngle = ψgoal + ½ scanAngle

ψfg = atan2(pfwpE − pfE, pfwpN − pfN)

ψgoal = atan2(pgoalE − pfE, pgoalN − pfN)
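The calculations above translate directly into code. The following is a minimal Python rendering (an illustrative sketch assuming the NED convention, where headings are computed as atan2(east, north) and points are (north, east) pairs; names mirror Table VIII):

```python
import math

def wrap(alpha):
    """Wrap an angle to (-pi, pi]."""
    return math.atan2(math.sin(alpha), math.cos(alpha))

def goal_heading(p_ne, p_goal_ne):
    """psi_goal: ground track angle from the fixed position to the goal."""
    return math.atan2(p_goal_ne[1] - p_ne[1], p_goal_ne[0] - p_ne[0])

def avoidance_wp(p_ne, bearing_angle, av_direction,
                 avoidance_angle, avoidance_wp_distance):
    """Avoidance waypoint offset from the fixed position along psi_a."""
    psi_a = bearing_angle + av_direction * avoidance_angle
    return (p_ne[0] + avoidance_wp_distance * math.cos(psi_a),  # north
            p_ne[1] + avoidance_wp_distance * math.sin(psi_a))  # east

def choose_av_direction(psi_goal, left_obstacle_angle, right_obstacle_angle):
    """avDirection: +1 or -1 depending on which side of the goal
    direction offers more angular clearance."""
    if wrap(psi_goal - left_obstacle_angle) > wrap(right_obstacle_angle - psi_goal):
        return 1
    return -1
```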
Table IX. ESM state machine language.(a)

↑                 triggers transition when a clock signal is received
.../↑event, ...   event sent during a state transition(b)
∗                 triggers transition if at the time a clock signal is received at least one nested state of the current superstate
                  is a final state or all statements of the current atomic state have been processed
(superstate)      superstate encapsulating nested states of container containerName
(container)       container containerName with two orthogonal regions (two concurrent states)
(region)          region encapsulating a flat state machine (no concurrent states)

aThe ESM language is similar to Harel's statecharts (Harel, 1987) and the UML state machine language. The table contains only elements
that are specific to the ESM language and necessary for the understanding of state diagrams used in this paper.
bIn this paper, state diagrams only contain events that are necessary for the understanding of a method, and events are not filtered at
container boundaries.
REFERENCES
Andert, F., & Adolf, F. (2009). Online world modeling and
path planning for an unmanned helicopter. Autonomous
Robots, 27(3), 147–164.
Andert, F., Adolf, F.-M., Goormann, L., & Dittrich, J. (2010). Au-
tonomous vision-based helicopter flights through obstacle
gates. Journal of Intelligent and Robotic Systems, 57(1-4),
259–280.
Bachrach, A., He, R., Prentice, S., & Roy, N. (2011). RANGE
- robust autonomous navigation in GPS-denied environ-
ments. Journal of Field Robotics, 28(5), 644–666.
Bachrach, A., He, R., & Roy, N. (2009). Autonomous flight in
unknown indoor environments. International Journal of
Micro Air Vehicles, 1(4), 217–228.
Beyeler, A., Zufferey, J.-C., & Floreano, D. (2009). Vision-based
control of near-obstacle flight. Autonomous Robots, 27(3),
201–219.
Byrne, J., Cosgrove, M., & Mehra, R. (2006). Stereo based obsta-
cle detection for an unmanned air vehicle. In Proceedings
of the IEEE International Conference on Robotics and Au-
tomation (ICRA) (pp. 2830–2835), Orlando, FL.
Choset, H., Lynch, K., Hutchinson, S., Kantor, G., Burgard,
W., Kavraki, L., & Thrun, S. (2005). Principles of Robot
Motion: Theory, Algorithms, and Implementations. Cam-
bridge, MA: MIT Press.
Clothier, R., Baumeister, R., Brünig, M., Duggan, A., & Wilson,
M. (2011). The Smart Skies project. IEEE Aerospace and
Electronic Systems Magazine, 26(6), 14–23.
Conroy, J., Gremillion, G., Ranganathan, B., & Humbert, J.
(2009). Implementation of wide-field integration of optic
flow for autonomous quadrotor navigation. Autonomous
Robots, 27(3), 189–198.
Dauer, J., Lorenz, S., & Dittrich, J. (2011). Advanced attitude
command generation for a helicopter UAV in the context
of a feedback free reference system. In AHS International
Specialists Meeting on Unmanned Rotorcraft (pp. 1–12),
Tempe, AZ.
Garratt, M., & Chahl, J. (2008). Vision-based terrain following
for an unmanned rotorcraft. Journal of Field Robotics, 25(4-
5), 284–301.
Griffiths, S., Saunders, J., Curtis, A., Barber, B., McLain, T., &
Beard, R. (2006). Maximizing miniature aerial vehicles—
Obstacle and terrain avoidance for MAVs. IEEE Robotics
and Automation Magazine, 13(3), 34–43.
Harel, D. (1987). Statecharts: A visual formalism for complex
systems. Science of Computer Programming, 8(3), 231–
274.
He, R., Bachrach, A., & Roy, N. (2010). Efficient planning un-
der uncertainty for a target-tracking micro aerial vehicle.
In Proceedings of the IEEE International Conference on
Robotics and Automation (ICRA) (pp. 1–8), Anchorage,
AK.
Herisse, B., Hamel, T., Mahony, R., & Russotto, F. (2010). A
terrain-following control approach for a VTOL unmanned
aerial vehicle using average optical flow. Autonomous
Robots, 29(3-4), 381–399.
Hrabar, S. (2012). An evaluation of stereo and laser-based range
sensing for rotorcraft unmanned aerial vehicle obstacle
avoidance. Journal of Field Robotics, 29(2), 215–239.
Hrabar, S., & Sukhatme, G. (2009). Vision-based navigation
through urban canyons. Journal of Field Robotics, 26(5),
431–452.
Johnson, E. N., Mooney, J. G., Ong, C., Hartman, J., & Sa-
hasrabudhe, V. (2011). Flight testing of nap-of-the-earth
unmanned helicopter systems. In Proceedings of the 67th
Annual Forum of the American Helicopter Society (pp.
1–13), Virginia Beach, VA.
Kendoul, F. (2012). A survey of advances in guidance, naviga-
tion and control of unmanned rotorcraft systems. Journal
of Field Robotics, 29(2), 315–378.
Meier, L., Tanskanen, P., Heng, L., Lee, G. H., Fraundorfer, F.,
& Pollefeys, M. (2012). PIXHAWK: A micro aerial vehi-
cle design for autonomous flight using onboard computer
vision. Autonomous Robots, 33(1-2), 21–39.
Merz, T., & Kendoul, F. (2011). Beyond visual range obstacle avoidance and infrastructure inspection by an autonomous helicopter. In IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) (pp. 4953–4960), San Francisco, CA.
Merz, T., Rudol, P., & Wzorek, M. (2006). Control system framework for autonomous robots based on extended state machines. In IARIA International Conference on Autonomic and Autonomous Systems (ICAS) (pp. 1–8), Silicon Valley, CA.
Montgomery, J., Johnson, A., Roumeliotis, S., & Matthies, L. (2006). The Jet Propulsion Laboratory autonomous helicopter testbed: A platform for planetary exploration technology research and development. Journal of Field Robotics, 23(3/4), 245–267.
Ruffier, F., & Franceschini, N. (2005). Optic flow regulation: The key to aircraft automatic guidance. Robotics and Autonomous Systems, 50(4), 177–194.
Sanfourche, M., Besnerais, G. L., Fabiani, P., Piquereau, A., & Whalley, M. (2009). Comparison of terrain characterization methods for autonomous UAVs. In Proceedings of the 65th Annual Forum of the American Helicopter Society (pp. 1–14), Grapevine, TX.
Sanfourche, M., Delaune, J., Besnerais, G. L., de Plinval, H., Israel, J., Cornic, P., Treil, A., Watanabe, Y., & Plyer, A. (2012). Perception for UAV: Vision-based navigation and environment modeling. Journal Aerospace Lab, 4, 1–19.
Scherer, S., Rehder, J., Achar, S., Cover, H., Chambers, A., Nuske, S., & Singh, S. (2012). River mapping from a flying robot: State estimation, river detection, and obstacle mapping. Autonomous Robots, 33(1-2), 189–214.
Scherer, S., Singh, S., Chamberlain, L., & Elgersma, M. (2008). Flying fast and low among obstacles: Methodology and experiments. International Journal of Robotics Research, 27(5), 549–574.
Shim, D., Chung, H., & Sastry, S. (2006). Conflict-free navigation in unknown urban environments. IEEE Robotics and Automation Magazine, 13(3), 27–33.
Takahashi, M., Schulein, G., & Whalley, M. (2008). Flight control law design and development for an autonomous rotorcraft. In Proceedings of the 64th Annual Forum of the American Helicopter Society (pp. 1652–1671), Montreal, Canada.
Theodore, C., Rowley, D., Hubbard, D., Ansar, A., Matthies, L., Goldberg, S., & Whalley, M. (2006). Flight trials of a rotorcraft unmanned aerial vehicle landing autonomously at unprepared sites. In Proceedings of the 62nd Annual Forum of the American Helicopter Society (pp. 1–15), Phoenix, AZ.
Tsenkov, P., Howlett, J., Whalley, M., Schulein, G., Takahashi, M., Rhinehart, M., & Mettler, B. (2008). A system for 3D autonomous rotorcraft navigation in urban environments. In Proceedings of the AIAA Guidance, Navigation, and Control Conference and Exhibit (pp. 1–23), Honolulu, HI.
Viquerat, A., Blackhall, L., Reid, A., Sukkarieh, S., & Brooker, G. (2007). Reactive collision avoidance for unmanned aerial vehicles using Doppler radar. In Proceedings of the International Conference on Field and Service Robotics (FSR) (pp. 245–254), Chamonix, France.
Whalley, M., Takahashi, M., Tsenkov, P., & Schulein, G. (2009). Field-testing of a helicopter UAV obstacle field navigation and landing system. In Proceedings of the 65th Annual Forum of the American Helicopter Society (pp. 1–8), Grapevine, TX.
William, B., Green, E., & Oh, P. (2008). Optic-flow-based collision avoidance. IEEE Robotics and Automation Magazine, 15(1), 96–103.
Zelenka, R., Smith, P., Coppenbarger, R., Njaka, C., & Sridhar, B. (1996). Results from the NASA automated nap-of-the-earth program. In Proceedings of the 52nd Annual Forum of the American Helicopter Society (pp. 107–115), Washington, D.C.
Zufferey, J.-C., & Floreano, D. (2006). Fly-inspired visual steering of an ultralight indoor aircraft. IEEE Transactions on Robotics, 22(1), 137–146.

1. INTRODUCTION

In airborne remote sensing, flights are conducted at low altitude or close to obstacles if sensors must be placed at a short distance from objects of interest due to, among other things, limited spatial sensor resolution, limited sensing range, occlusion, or atmospheric disturbance. Applications of airborne remote sensing in rural areas include crop monitoring and inspection of farm infrastructure. In addition to requirements in remote sensing, operating unmanned aircraft close to terrain and obstacles decreases the risk of collisions with manned aircraft, which usually operate at higher altitude and clear of obstacles.

Low-altitude flights close to obstacles are performed more easily with rotorcraft than with fixed-wing aircraft due to their ability to fly at any low speed. Unmanned aircraft are attractive because using manned aircraft is often more expensive and hazardous for such operations. While smaller unmanned rotorcraft such as electric multirotors are sufficient for some applications, larger aircraft are often required for traveling longer distances and carrying heavier sensors. However, operations of larger unmanned helicopters are currently constrained by the requirement for skilled and possibly certified pilots and reliable communication links, especially for operations beyond visual range in unknown environments.

This paper presents the LAOA (low-altitude obstacle avoidance) system. Its goal is to guide a robotic helicopter such that it arrives at a specified location without human interaction and without causing damage to the environment or the aircraft. There is no requirement regarding the trajectory. Safety is an important system requirement, as especially larger helicopters may be hazardous. In addition to being safe, the system should reliably guide the helicopter to a specified location.

Direct correspondence to: Torsten Merz, e-mail: torsten.merz@csiro.au.
Safety and reliability are both attributes of dependability. System dependability has been the primary requirement of our work. System performance in terms of minimal task execution time or minimal travel distance has been a secondary requirement. In addition to dependability, we aimed for a cost-effective, generic system that can be implemented in a relatively short time.

The LAOA system enables safe autonomous operations of robotic helicopters in environments with an unknown terrain profile and unknown obstacles under the following assumptions: (1) there is no other aircraft operating in the area and obstacles are static, (2) there are no overhead obstacles,
(3) there are no obstacles smaller or thinner than the system can detect at the minimum stopping distance, and (4) the base helicopter system is serviceable and operated within specified weather limitations.

Figure 1. One of CSIRO's unmanned helicopters with an integrated LAOA system (labeled components: inspection camera, flight and navigation computers, 2D LIDAR system). The helicopter is configured for inspections of vertical structures.

We assume the base helicopter system includes a control and navigation system as specified in this paper. The LAOA system makes use of the particular flight properties of helicopters. It is designed for a variety of helicopters of different sizes. We have tested it on a small unmanned single-rotor helicopter (Figure 1).

Reactive behavior-based methods have been successfully applied in many areas of robotics, but their potential has not been explored much for robotic helicopters performing real-world tasks. Given our task specification, we decided it was worthwhile investigating. For the obstacle avoidance part, a reactive navigation approach without a global planning component was chosen because (1) sufficiently accurate and current maps are often not available for the mission areas, (2) mapping and planning during the flight does not necessarily increase the efficiency of mission execution in rural areas,¹ and (3) a reactive approach helps to reduce the computational resources required for real-time implementation. Range information from LIDAR (light detection and ranging, also LADAR) is used to create stimuli for the reactive system and for height control during terrain following. We decided to utilize LIDAR technology because, from our experience with different sensing options, a LIDAR-based approach was likely to give us the best results for terrain and obstacle detection.

¹The number of obstacles encountered during remote sensing missions in rural areas is assumed to be relatively low.
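Assumption (3) links the smallest detectable obstacle to the minimum stopping distance. As a rough illustration (our own sketch with placeholder numbers, not the safety margins used in the LAOA system), a conservative stopping distance adds the distance covered during sensing and control latency to the kinematic braking distance v²/(2a):

```python
def min_stopping_distance(speed: float, decel: float, latency: float) -> float:
    """Distance flown during sensing/control latency plus the braking
    distance v^2 / (2*a), assuming constant deceleration.
    All parameter values below are hypothetical."""
    return speed * latency + speed ** 2 / (2.0 * decel)

# Example: 4 m/s cruise, 2 m/s^2 braking, 0.5 s latency -> 6 m.
d = min_stopping_distance(4.0, 2.0, 0.5)
```

Any obstacle the sensor cannot resolve at this range would violate assumption (3) for the chosen speed.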
Our main contributions are (1) a novel terrain and obstacle detection method for helicopters using a single off-the-shelf two-dimensional (2D) LIDAR system and yaw motion of the helicopter to extend its field of view; (2) a novel, computationally efficient, reactive behavior-based method for goal-oriented obstacle and terrain avoidance optimized for rotorcraft; (3) details of the implementation on a small unmanned helicopter and results of extensive flight testing; and (4) evidence that it is feasible to conduct autonomous goal-oriented low-altitude flights dependably with simple but effective methods. The proposed methods are particularly suitable for approaching vertical structures at low altitude. All experiments were conducted in unstructured, unknown outdoor environments.

The paper is structured as follows: In the next section we discuss related work. Section 3 provides a system overview. The methods for detecting terrain and obstacles are described in Section 4. Section 5 provides a description of the flight modes of the LAOA system. The strategies for goal-oriented obstacle avoidance are described in Section 6. Experimental results of the field-tested system are provided in Section 7. Section 8 concludes with a summary including system limitations and future work. Nomenclature and technical details about the implemented system can be found in the Appendix.

2. RELATED WORK

Developing autonomous rotorcraft systems with onboard terrain-following and obstacle-avoidance capabilities is an active research area. A variety of approaches to the terrain-following and obstacle-avoidance problems exist, and many papers have been published. Kendoul (2012) provides a comprehensive survey of rotary-wing unmanned aircraft system (UAS) research, including terrain following and obstacle detection and avoidance. In this section, we briefly review related work and present major achievements and remaining challenges in these areas of research.
2.1. Sensing Technologies for Obstacle and Terrain Detection

The sensing technologies commonly used onboard unmanned aircraft are computer vision (passive sensing) and LIDAR (active sensing). Cameras or electro-optic sensors are a popular approach for environment sensing because they are light, passive, compact, and provide rich information about the aircraft's self-motion and its surrounding environment. Different types of imaging sensors have been used to address the mapping and obstacle-detection problems. Stereo imaging systems have the advantage of providing depth images and ranges to obstacles and have been used on rotary-wing UASs for obstacle detection and mapping (Andert and Adolf, 2009; Byrne et al., 2006; Hrabar, 2012; Theodore et al., 2006). Monocular vision (single camera) has also been used as the main perception sensor in different projects (Andert et al., 2010; Montgomery et al., 2006; Sanfourche et al., 2009). Recently, optic flow sensors have emerged as an alternative sensing technology for obstacle detection and avoidance onboard small and mini unmanned aircraft (Beyeler et al., 2009; Ruffier and Franceschini, 2005; William et al., 2008). Some researchers have also investigated the use of wide field-of-view imaging systems such as fisheye and omnidirectional cameras for obstacle detection indoors (Conroy et al., 2009) and outdoors (Hrabar and Sukhatme, 2009). The drawbacks of vision-based approaches are their sensitivity to ambient light and scene texture. Furthermore, the complexity of image-processing algorithms makes a real-time implementation on low-power embedded computers challenging.

LIDAR is a suitable technology for mapping and obstacle detection since it directly measures range by scanning a laser beam over the environment and measuring distance through time-of-flight or interference. LIDAR systems outperform vision systems in terms of accuracy and robustness to ambient lighting and scene texture. Furthermore, mapping the environment and detecting obstacles from LIDAR range data is less complex than doing so from intensity images. Indeed, most successful results and major achievements in obstacle field navigation for unmanned rotorcraft have been achieved using LIDAR systems (He et al., 2010; Scherer et al., 2008; Shim et al., 2006; Tsenkov et al., 2008). Despite the numerous benefits of LIDAR systems, they suffer from some problems. They are generally heavier than cameras and require more power (active sensors), which makes their integration in smaller aircraft with limited payload challenging.
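The time-of-flight principle mentioned above reduces to a one-line computation: range is half the round-trip travel time of the laser pulse multiplied by the speed of light. A minimal generic sketch (not tied to any particular LIDAR product):

```python
SPEED_OF_LIGHT = 299_792_458.0  # m/s

def tof_range(round_trip_s: float) -> float:
    """Range from the round-trip time of a laser pulse: r = c * t / 2."""
    return SPEED_OF_LIGHT * round_trip_s / 2.0

# A pulse echo arriving ~200 ns after emission places the target
# roughly 30 m away.
r = tof_range(200e-9)
```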
LIDAR systems are also sensitive to some environmental conditions such as rain and dust, and they can be blinded by the sun. The main drawback of off-the-shelf LIDAR systems is their limited field of view. Indeed, most commercially available LIDAR systems only perform line scans. For 3D navigation, these 2D LIDAR systems have been mounted on nodding or rotating mechanisms when used on rotorcraft (Scherer et al., 2012; Takahashi et al., 2008). A few compact 3D LIDAR systems exist, but they are not commercially available, such as the one from Fiberteck Inc. (Scherer et al., 2008), or they are very expensive and heavy, such as the Velodyne 3D LIDAR system.

Flash LIDAR cameras, or 3D time-of-flight (TOF) cameras, are a promising emerging 3D sensing technology that will certainly increase the perception capabilities of robots. Unlike traditional LIDAR systems that scan a collimated laser beam over the scene, Flash LIDAR cameras illuminate the entire scene with diffuse laser light and compute time-of-flight for every pixel in an imager, thereby resulting in a dense 3D depth image. Recently, several companies started offering Flash LIDAR cameras commercially, such as the SwissRanger SR4000 (510 g) from MESA Imaging AG, Canesta 3D cameras, the TigerEye 3D Flash LIDAR (1.6 kg) from Advanced Scientific Concepts Inc., the Ball Aerospace 5th Generation Flash LIDAR, etc. However, they are either heavy and expensive, or small but with very limited range (10 m for the SwissRanger SR4000), and often not suitable for outdoor environments.

Radar is the sensor of choice for long-range collision and obstacle detection in larger aircraft. Radar provides near-all-weather broad-area imagery. However, for integration in smaller unmanned aircraft, most radar systems are less suitable due to their size, weight, and power consumption. Moreover, they are quite expensive.
There are a few smaller radar systems, such as the Miniature Radar Altimeter (MRA) Type 1 from Roke Manor Research Ltd., which weighs only 400 g and has a range of 700 m. We are not aware of any work by an academic research group on the use of radar onboard unmanned rotorcraft for obstacle and collision avoidance. In Viquerat et al. (2007), work was reported using radar onboard a fixed-wing unmanned aircraft. The use of other ranging sensors such as ultrasonic and infrared sensors has been limited to a few indoor flights or ground detection during autonomous landing.

Since most of these sensing technologies did not meet our requirement of developing a dependable and cost-effective obstacle-avoidance system for an unmanned helicopter in a relatively short time, we based our system upon an already proven small-sized 2D LIDAR system. For the reasons we discuss in Section 3, we decided to use the motion of the helicopter itself to extend the field of view of the LIDAR system for 3D perception rather than using a nodding or rotating mechanism.

2.2. Pirouette Descent and Waggle Cruise Flight Modes

One of the main contributions of our work is the introduction of two special flight modes (pirouette descent and waggle cruise) for extending the field of view of the 2D LIDAR system, as described in Section 5. We have not found a description of similar flight modes in the literature except for the work presented in Dauer et al. (2011). Indeed, researchers from the German Aerospace Center (DLR) have considered the problem of flying a linear path while constantly changing the helicopter heading or yaw to aim the mission sensor (e.g., camera) at its target. They have proposed a quaternion-based approach for attitude command generation and control for a helicopter UAS, and they evaluated its performance in a hardware-in-the-loop (HIL) simulation environment for an elliptic turn maneuver (similar to the waggle cruise flight mode described in Section 5.7).
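The geometric idea behind yaw-extended sensing is that rotating the aircraft sweeps the fixed vertical scan plane of the 2D LIDAR through a 3D volume. A minimal sketch of that projection (our simplification: roll, pitch, and sensor mounting offsets are ignored, and all names are ours, not the paper's):

```python
import math

def beam_to_world(range_m: float, beam_angle: float, yaw: float):
    """Project one LIDAR beam into a world-fixed frame.

    beam_angle: elevation of the beam within the vertical scan plane
                (rad, 0 = horizontal, pi/2 = straight up).
    yaw:        current aircraft heading (rad).
    Yawing rotates the scan plane about the vertical axis, so scans
    taken at different headings cover a 3D volume around the aircraft.
    """
    horizontal = range_m * math.cos(beam_angle)  # in-plane horizontal part
    z = range_m * math.sin(beam_angle)           # height part
    x = horizontal * math.cos(yaw)
    y = horizontal * math.sin(yaw)
    return (x, y, z)
```

With zero yaw a horizontal 10 m return lies straight ahead; after a 90° yaw the same beam geometry points along the world y axis.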
Although the approach presented in Dauer et al. (2011) results in better tracking accuracy, especially for relatively high forward speeds and high yawing rates, the control approach used in our work produced satisfactory results for the intended application. This is because of the low forward speed of the helicopter, which is mainly imposed by the limited sensing range of the LIDAR system we have been using.

2.3. Terrain-following Systems

A closed-loop terrain-following system allows aircraft to automatically maintain a relatively constant altitude above ground level. This technology is primarily used by military aircraft during nap-of-the-earth (NOE) flight to take advantage of terrain masking and avoid detection by enemy radar systems. However, terrain following is also a useful capability for civilian UASs. For example, low-altitude remote sensing flights with fixed focal cameras often require capturing images of objects on the ground at constant resolution. Furthermore, terrain following is a useful method for approaching short vertical inspection targets, such as farm windpumps, at a low but safe height.

As for obstacle avoidance, terrain following can be achieved with reactive or mapping-based approaches using passive (e.g., vision) or active sensors (e.g., LIDAR). Bio-inspired optic flow methods have been investigated for terrain following and were demonstrated only on small indoor rotorcraft such as quadrotors (Herisse et al., 2010) and a 100 g tethered rotorcraft (Ruffier and Franceschini, 2005). An interesting result on outdoor terrain following using optic flow has been reported in Garratt and Chahl (2008). The developed system was implemented onboard a Yamaha RMAX helicopter and allowed it to maintain 1.27 m clearance from the ground at a speed of 5 m/s, using height estimates from optic flow and GPS velocities.

The LIDAR-based localization and mapping system developed by MIT researchers (Bachrach et al., 2009) for autonomous indoor navigation of a small quadrotor in GPS-denied environments included a component related to terrain following. Some of the beams of a horizontally mounted Hokuyo LIDAR system were deflected down to estimate and control height above the ground plane.
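A reactive terrain follower, as the term is used above, closes the loop directly on the measured height above ground rather than on a map. A toy proportional sketch of that idea (our illustration with placeholder gains and limits, not the controller used in the LAOA system or in the cited work):

```python
def climb_rate_command(measured_agl: float, target_agl: float,
                       gain: float = 0.5, v_max: float = 1.0) -> float:
    """Map the height-above-ground error to a bounded vertical velocity
    command. Positive output commands a climb. Gain and limit values
    are hypothetical placeholders."""
    v = gain * (target_agl - measured_agl)
    return max(-v_max, min(v_max, v))

# Flying 2 m too low yields a climb command saturated at 1.0 m/s.
v = climb_rate_command(8.0, 10.0)
```

The saturation bound is what keeps the response from becoming the kind of aggressive height change the text warns about over discontinuous terrain.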
Reactive terrain following provides computationally efficient ground detection and avoidance capabilities without the need for mapping and planning. However, the approach has some limitations. With only limited knowledge of the terrain profile ahead, sensing limitations of the obstacle detection system, and a limited flight envelope, the maximum safe speed of the aircraft is generally lower compared to an approach that can make use of maps. The flight path may also suffer from unnecessarily aggressive height changes when flying above nonsmooth terrain with large variations and discontinuities in its profile.

An alternative to reactive terrain following is ground detection and avoidance through mapping and path planning (Scherer et al., 2008; Tsenkov et al., 2008). These approaches are more general than reactive terrain-following methods because they are able to perform terrain following as well as low-altitude flights without the need to maintain a constant height above the ground. However, they require accurate position estimates relative to the reference frame of an accurate map, and they are complex and generally computationally more expensive than the reactive methods. We show that the reactive method we propose performs well for the operations we envisage.

2.4. Obstacle-avoidance Systems and Algorithms

A variety of approaches to the obstacle-avoidance problem onboard unmanned rotorcraft exist. They can be classified into two main categories: SMAP-based approaches and SMAP-less techniques (Kendoul, 2012). In the SMAP (simultaneous mapping and planning) framework, mapping and planning are jointly performed to build a map of the environment, which is then used for path planning. SMAP-less obstacle-avoidance strategies are generally reactive, without the need for a map or a global path-planning algorithm.
2.4.1. SMAP-less Approaches

SMAP-less techniques aim at performing navigation and obstacle avoidance with reactive methods, without mapping and global path planning. Reactive obstacle detection and avoidance algorithms operate in a timely fashion and compute one action at every instant based on the current context. They use immediate measurements of the obstacle field to produce a reactive response that prevents last-minute collisions by stopping or swerving the vehicle when an obstacle is known to be in the trajectory. However, it is often difficult to prove completeness of reactive algorithms for reaching a goal (if a path exists), especially for systems with uncertainties in perception and control. Completeness proofs exist for some algorithms, such as the Bug2 algorithm (Choset et al., 2005), which is similar to the second avoidance strategy we describe in Section 6.

In the literature, most SMAP-less approaches are vision-based, where obstacles are detected and avoided using optic flow (Beyeler et al., 2009; Conroy et al., 2009; Hrabar and Sukhatme, 2009; William et al., 2008; Zufferey and Floreano, 2006) or a priori knowledge of some characteristics of the obstacle, such as color and shape (Andert et al., 2010). Bio-inspired methods that use optic flow have been popular because of their simplicity and the low weight of the required hardware. This is an active research area that can advance the state of the art in rotary-wing UAS 3D navigation. Promising and successful results have already been obtained when using optic flow for obstacle avoidance [Zufferey and Floreano (2006), indoor 30 g fixed-wing UAS; William et al. (2008), indoor MAV; Hrabar and Sukhatme (2009), outdoor Bergen helicopter; Beyeler et al. (2009), outdoor fixed-wing UAS; and Conroy et al. (2009), indoor quadrotor]. These methods are very powerful and provide an interesting alternative for both perception and guidance onboard mini UASs with limited payload. However, the problem of robust optic-flow computation in real time and obstacle detection in natural environments is still a challenge and an open research area.

Reactive obstacle avoidance based on a priori knowledge of some characteristics of the obstacle has been applied in some projects. In Andert et al. (2010), for example, DLR researchers developed a vision-based obstacle-avoidance system that allows a small helicopter to fly through gates that are identified by colored flags. This system was demonstrated in real time using a 12 kg robotic helicopter that autonomously crossed 6 m × 6 m gates at a speed of 1.5 m/s without collisions. The system presented in Hrabar and Sukhatme (2009) includes a forward-facing stereo camera to detect frontal obstacles. Based on a 3D point cloud, obstacles are detected in the upper half of the image using a distance threshold and a region-growing algorithm. Once the obstacles have been detected, an appropriate evasive control command (turn away, stop) is generated.

LIDAR systems have also been used for reactive obstacle avoidance onboard rotary-wing and fixed-wing UASs. Scherer et al. (2008) developed a reactive obstacle-avoidance system, or local path planner, that is based on a model of obstacle avoidance by humans. Their reactive system uses 3D LIDAR data expressed in aircraft-centric spherical coordinates, and it can be combined with a global path planner. They have demonstrated results on an RMAX helicopter operating at low altitudes and in different environments. In a recent work, Johnson et al. (2011) developed and flight-tested two reactive methods for obstacle and terrain avoidance to support nap-of-the-earth helicopter flight.
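The Bug2 algorithm cited above alternates between driving straight toward the goal along the start-goal line (the "m-line") and following an obstacle boundary until the m-line is re-crossed closer to the goal. A schematic single-step decision rule for textbook Bug2 (our sketch after Choset et al. (2005), not the avoidance strategy of Section 6):

```python
from enum import Enum, auto

class Mode(Enum):
    GO_TO_GOAL = auto()       # drive along the m-line toward the goal
    FOLLOW_BOUNDARY = auto()  # circumnavigate the blocking obstacle

def bug2_step(mode: Mode, on_m_line: bool, obstacle_ahead: bool,
              dist_to_goal: float, hit_dist: float):
    """One decision step of Bug2. Returns (new_mode, new_hit_dist), where
    hit_dist is the distance to the goal recorded when the current
    boundary-following phase began."""
    if mode is Mode.GO_TO_GOAL:
        if obstacle_ahead:
            return Mode.FOLLOW_BOUNDARY, dist_to_goal  # record the hit point
        return Mode.GO_TO_GOAL, hit_dist
    # Leave the boundary only when back on the m-line, strictly closer to
    # the goal than at the hit point, and with a free path ahead.
    if on_m_line and dist_to_goal < hit_dist and not obstacle_ahead:
        return Mode.GO_TO_GOAL, hit_dist
    return Mode.FOLLOW_BOUNDARY, hit_dist
```

The strictly-closer leave condition is what makes the completeness proof work: each boundary-following phase ends measurably nearer the goal, so the robot cannot loop forever.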
The first method is similar to ours in the sense that it is based on simple processing of each LIDAR scan, whereas the second one employs the potential field technique. LIDAR-based reactive obstacle avoidance has also been applied to small fixed-wing UASs such as the Brigham Young University (BYU) platform (Griffiths et al., 2006).

One of the motivations of our work was to investigate the potential and effectiveness of using reactive obstacle-avoidance systems for achieving real-world applications in natural unknown environments without the need for mapping and global path-planning algorithms. SMAP-less techniques are attractive because of their simplicity and real-time capabilities. However, reactive methods are prone to being incomplete (no path to the goal is found) and inefficient in natural environments. The methods we propose reliably guide the helicopter to a specified point and employ heuristics to cope with inefficiency and the local minima problem.

The basic ideas of our work were presented at the IEEE/RSJ International Conference on Intelligent Robots and Systems in 2011 (Merz and Kendoul, 2011). In comparison to the conference paper, this paper provides a more detailed description of the system, its underlying methods, and the experiments we conducted. The level of detail is sufficient to facilitate the implementation on helicopters similar to the one used for our experiments. Moreover, we provide experimental results that have not been published in the conference paper.

2.4.2. SMAP-based Approaches

SMAP-based approaches have been proven to be effective and efficient for dealing with obstacles in many types of unknown environments. However, there are environments in which a reactive method would perform equally well if not better, as no maps need to be built. Moreover, SMAP-based approaches are computationally expensive, especially those based on computer vision.
Although many papers have been published on vision-based obstacle avoidance for rotorcraft, very few systems have been implemented on an actual aircraft, and only modest experimental results have been reported in the literature. In Andert and Adolf (2009), stereo vision was used to build a world representation that combines occupancy grids and polygonal features. Experimental results on mapping are presented in the paper, but there are no results on path planning and obstacle avoidance. A similar system was described in Meier et al. (2012), where stereo vision was used for mapping and obstacle avoidance onboard a small quadrotor UAS. Another stereo vision-based system for rotorcraft is described in Byrne et al. (2006). It combines block-matching stereo (depth image) with graph-based image segmentation appropriate for obstacle detection. This system was demonstrated in real time using Georgia Tech's Yamaha RMAX helicopter. In Sanfourche et al. (2009) and Montgomery et al. (2006), monocular and stereo vision were used to map the terrain and to select a safe landing area for an unmanned helicopter. From the reviewed literature, we found that there are no successful implementations of vision-based methods onboard rotary-wing UASs for obstacle avoidance using the SMAP framework.

The most significant achievements in 3D navigation and obstacle avoidance by unmanned rotorcraft have been obtained using LIDAR systems and a SMAP-based approach. The most successful implementations on rotary-wing UASs are probably the ones by CMU (Scherer et al., 2008), U.S. Army/NASA (Tsenkov et al., 2008; Whalley et al., 2009), UC Berkeley (Shim et al., 2006), and MIT (Bachrach et al., 2009). Some experimental results on using LIDAR systems onboard an unmanned helicopter for obstacle avoidance were reported in Shim et al.
(2006), where the BEAR team developed a 2D obstacle-avoidance algorithm that combines local obstacle maps for perception with a hierarchical model predictive controller for path planning and flight control. Equipped with this system, a Yamaha R-50 helicopter was able to detect 3 m × 3 m canopies (simulating

Journal of Field Robotics DOI 10.1002/rob
Journal of Field Robotics—2013

urban obstacles), to plan its path, and to fly around obstacles to reach the goal waypoint at a nominal speed of 2 m/s.

In Scherer et al. (2008), researchers from CMU addressed the problem of flying relatively fast at low altitudes in cluttered environments relying on online LIDAR-based sensing and mapping. Their approach combines a slower 3D global path planner that continuously replans the path to the goal based on the perceived environment with a faster 3D local collision-avoidance algorithm that ensures that the vehicle stays safe. A custom 3D LIDAR system from Fibertek Inc. was integrated into a Yamaha RMAX helicopter and used as the main perception sensor. The system has been extensively flight-tested at different sites with different obstacles and flight speeds. More than 700 successful obstacle-avoidance runs were performed in which the helicopter autonomously avoided buildings, trees, and thin wires.

Other impressive results for rotary-wing UAS SMAP-based obstacle avoidance are reported in Whalley et al. (2009). The U.S. Army/NASA rotorcraft division has developed an Obstacle Field Navigation (OFN) system for low-altitude flights of rotary-wing UASs in urban environments. A SICK LIDAR system was mounted on a spinning mechanism and used to generate 3D maps (an obstacle proximity map and a grid height map) of the environment. Two 3D path-planning algorithms have been proposed: the first uses a 2D A* grid search on map slices, and the second a 3D A* search on a height map (Tsenkov et al., 2008). This OFN system has been implemented on a Yamaha RMAX helicopter and demonstrated in a number of real obstacle-avoidance scenarios and environments. More than 125 flight tests were conducted at different sites to avoid natural and man-made obstacles at speeds that ranged from 1 to 4 m/s.
While the previous systems were developed mainly for outdoor navigation and unmanned helicopters, the system presented in Bachrach et al. (2009) and Bachrach et al. (2011) is designed for mini rotorcraft such as quadrotors flying in indoor GPS-denied environments. The proposed system is based on stereo vision and a Hokuyo LIDAR system for localization and mapping. When this navigation system was used with a planning and exploration algorithm, the quadrotor was able to autonomously navigate (motion estimation, mapping, and planning) in open lobbies, cluttered environments, and office hallways. Another LIDAR-based mapping system for small multirotor UASs is presented in Scherer et al. (2012). It is based on an off-axis spinning Hokuyo LIDAR system that is used to create a 3D evidence grid of the riverine environment. Experimental results along a 2 km loop of river using a surrogate perception payload on a manned boat are presented.

For a comprehensive literature review of SMAP-based approaches and environment mapping for UAS navigation, we refer the motivated reader to the survey papers of Kendoul (2012) and Sanfourche et al. (2012).

2.4.3. Other Approaches

In Zelenka et al. (1996), the authors present results of semiautonomous nap-of-the-earth flights of full-sized helicopters. The research included vision-, radar-, and LIDAR-based approaches for terrain and obstacle detection and different avoidance methods using information from the detection system and a terrain database.

In Hrabar (2012), the author combines stereo vision and LIDAR for static obstacle avoidance for unmanned rotorcraft. 3D occupancy maps are generated online using range data from a stereo vision system, a 2D LIDAR system, or both at the same time. A goal-oriented obstacle-avoidance algorithm is used to check the occupancy map for potential collisions and to search for an escape point when an obstacle is detected along the current flight trajectory.
The system has been implemented on one of CSIRO's unmanned helicopters. It was tested in a number of flights and scenarios with a focus on evaluating and comparing stereo vision and LIDAR-based range sensing for obstacle avoidance. However, the avoidance algorithm is prone to the local minima problem and, as the results show, the system is not suited for safe flights without a backup pilot.

3. SYSTEM OVERVIEW

This section provides an overview of a helicopter system with an integrated LAOA system. We have used a component-based design approach for both software and hardware. Breaking down a complex system into individual components has the advantage that the individual components can be easily designed, tested, and certified. To maximize system dependability, we have utilized existing proven components in our design wherever possible.

The three main system components are a base helicopter system, a 2D LIDAR system, and the LAOA system. The description of base helicopter systems and LIDAR systems is beyond the scope of this paper. Technical specifications of the base system components we have been using can be found in Table V of the Appendix. The LAOA system is designed to be generic. It can be implemented on any robotic helicopter with velocity and yaw control inputs (see Section 5.2) and a navigation system that provides position, velocity, and attitude information. The system requires a number of parameters that are specific to the aircraft, the sensor and control system, and the environment. Most parameters are determined by geometric considerations. The parameters we used in the experiments described in Section 7 are provided in the Appendix.

Obstacle and terrain detection is based on a 2D LIDAR system that is rigidly mounted on the helicopter as described in Section 4.
There are several reasons why we chose a LIDAR-based approach: (1) LIDAR systems reliably detect objects within a suitable range and with sufficient resolution, (2) the systems produce very few false
Figure 2. Structure of the LAOA system (see the Appendix for nomenclature). The obstacle-avoidance functions include functions for waypoint generation. The terrain-following functions include functions for vertical height change. The flight-control functions include functions for trajectory generation.

positive detections in weather and environments in which we typically operate (dust-free and no rain or snowfall), and (3) obstacle and terrain detection based on range data requires relatively low processing effort. Compared with 3D LIDAR systems, 2D systems are widely commercially available at a reasonable price and require fewer computational resources.

In our approach, 3D information is obtained by using the motion of the helicopter to extend the field of view of the LIDAR system. We decided not to utilize a nodding or rotating mechanism for the following reasons: (1) such a mechanism requires extra payload capacity and electric power; (2) it is an additional component that could fail; (3) in most cases, a mechanism that is specific to a helicopter and a LIDAR system must be custom-built, and the development of a dependable mechanism is time-consuming and expensive; (4) especially on smaller aircraft, it is often difficult to find a mounting point for a 3D LIDAR system without obstructing the field of view of the sensor.

The structure of the LAOA system is shown in Figure 2.
The user inputs to the system are a goal position (2D position) and a goal height (height above ground). Both are typically provided by a waypoint sequencer that reads predefined flight plans or plans that are generated by a global path planner. The cruise speed is a fixed system parameter that depends on other parameters and is not meant to be changed by the operator (see Table VI of the Appendix). The system is divided into a perception part and a guidance & control part. The perception part is described in Section 4 and the guidance & control part in Sections 5 and 6. The interaction of system components is controlled by a state machine.

State machines are a well-suited formalism for the specification and design of large and complex reactive systems (Harel, 1987). We have utilized the extended state machine-based software framework ESM, which also facilitates a component-based real-time implementation of the proposed methods (Merz et al., 2006). The main differences between the classical finite-state machine formalism and ESM are the introduction of hierarchical and concurrent states and of data ports and paths for modeling data flow. The majority of the methods proposed in this paper are described in state diagrams at a level of detail necessary to understand the behavior of the helicopter. A brief description of the subset of the ESM language that is used in this paper can be found in Table IX of the Appendix.

The LAOA functions are implemented on the existing computers of the base helicopter system. The perception part is implemented on the navigation computer and the guidance & control part on the flight computer (see Table V of the Appendix). Both computers run a Linux operating system with a real-time kernel patch and the ESM run-time environment. The state machines are executed at 200 Hz (clock frequency of transition trigger; see Table IX of the Appendix).
All calculations of variables and events used in the state diagrams in this paper are executed within 0.5 ms on the specified hardware (assuming sensor readings are available in main memory). The maximum latency for reacting to an (external) event is 5 ms, and the control functions of the LAOA system are executed at 100 Hz.

4. OBSTACLE AND TERRAIN DETECTION

Obstacle and terrain detection is based on range measurements from a 2D LIDAR system and attitude estimates from the navigation system of the helicopter. The LAOA system is designed for 2D LIDAR systems with a scan range of approximately 270°. A scan rate of approximately 40 Hz and a scan resolution of approximately half a degree are sufficient for operations similar to the ones described in Section 7 with the parameters specified in Table VI of the Appendix.²

The LIDAR system is mounted with the scan plane parallel to the xz-plane of the helicopter body frame and the 90° blind section facing backwards (see Figure 3). In our current implementation, we assume there are no obstacles above the helicopter. However, for future applications requiring detection of overhead obstacles, the LIDAR system is mounted with the scan area symmetrical to the x-axis

²During waggle cruise flight, the spatial scan resolution is approximately 12 cm vertically and 61 cm horizontally at safeObstacleScanRange distance and at highest yaw rate.
Figure 3. Illustration of the LIDAR-based terrain and obstacle detection.

of the body frame x_B rather than more downward-oriented. A precise alignment with the reference frame of the navigation system is not required. Alignment errors of a few degrees are tolerated with the parameters specified in Table VI of the Appendix.

LIDAR scans are synchronized with the attitude estimates from the navigation system. Attitude estimates (φ_k, θ_k, ψ_k) are recorded at the time a sync signal is received from the LIDAR system (k referring to the kth scan). The sync signal indicates the completion of a scan. A LIDAR scan is processed after the sync signal has been received.

We define a reflection point as a point in the environment where the laser beam of the LIDAR system is reflected. A reflection point is assumed to be part of an obstacle. A LIDAR reading (r_i, λ_i) is a reflection point expressed in polar coordinates relative to the x-axis of the body frame (see Figure 3; i referring to the ith reading with a valid range value). Assuming the helicopter is stationary during a scan,³ reflection points can be expressed in the leveled body frame of the helicopter⁴ using the recorded attitude estimates. For obstacle and terrain detection, only the x and z components of reflection points are used. A 2D reflection point in the leveled body frame is calculated as follows:

    [ x_i ]   [  cos θ_k   sin θ_k cos φ_k ] [ r_i cos λ_i ]
    [ z_i ] = [ −sin θ_k   cos θ_k cos φ_k ] [ r_i sin λ_i ].    (1)

We define S as the set of all reflection points (x_i, z_i) in a scan with a minimum distance from the sensor (r_i ≥

³The parameters of the avoidance functions listed in Table VI of the Appendix are chosen such that the error introduced by this assumption is accommodated for.
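As a concrete illustration, the projection of Eq. (1) together with the minimum-range filter can be sketched in Python. This is a minimal sketch under our own naming (`project_scan`, `min_range`), not the LAOA implementation:

```python
import math

def project_scan(readings, phi, theta, min_range):
    """Project LIDAR readings (r_i, lambda_i) into the leveled body
    frame using the roll (phi) and pitch (theta) recorded at the scan
    sync signal, following Eq. (1). Readings closer than min_range
    are discarded (likely caused by the main rotor or insects)."""
    points = []
    for r, lam in readings:
        if r < min_range:
            continue
        xb, zb = r * math.cos(lam), r * math.sin(lam)
        x = math.cos(theta) * xb + math.sin(theta) * math.cos(phi) * zb
        z = -math.sin(theta) * xb + math.cos(theta) * math.cos(phi) * zb
        points.append((x, z))
    return points
```

With zero roll and pitch, the projection reduces to the identity, so a reading straight ahead at 10 m maps to the point (10, 0) in the leveled body frame.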
⁴The leveled body frame is the helicopter-carried NED frame rotated by the helicopter yaw angle around the N axis.

minimalLidarRange). Readings with a shorter range are discarded as they are likely to be caused by the main rotor or insects. Apart from that, the detection of an obstacle within the specified short minimum distance would be too late to initiate an avoidance maneuver.

4.1. Obstacles

For obstacle avoidance during forward flight, we only consider the closest obstacle in front of the helicopter. The closest frontal obstacle is defined as the 2D reflection point with the minimum x-component in a frontal detection window. The set of 2D reflection points S_f in the detection window and the horizontal distance d_f to the closest frontal obstacle are given by

    S_f = {(x_i, z_i) : |z_i| ≤ (1/2) detectionWindowSize, (x_i, z_i) ∈ S},    (2)
    d_f = min{x_i : (x_i, z_i) ∈ S_f}.    (3)

If S_f is empty, d_f is set to zero.

For obstacle avoidance during descents, we also calculate the vertical distance to the closest obstacle below the helicopter:

    S_d = {(x_i, z_i) : |x_i| ≤ farObstacleDistance, (x_i, z_i) ∈ S},    (4)
    d_d = min{z_i : (x_i, z_i) ∈ S_d}.    (5)

If S_d is empty, d_d is set to zero.

In our experiments, we did not filter reflection points for the calculation of d_f to ensure we detect the smallest obstacles. The system also detects larger insects or birds. This may not be wanted, as such animals either avoid the aircraft or are small enough not to damage it. To make the system less susceptible to such objects, a temporary filter could
be applied. However, as the situation rarely occurred in our experiments, we did not investigate this option further.

4.2. Terrain

For terrain following, the system requires a height estimate relative to the ground. We define the height above ground h_gnd as the intercept of the least-squares line fitted to the 2D reflection points in a detection window underneath the helicopter.⁵ The set of 2D reflection points in the detection window is given by

    S_g = {(x_i, z_i) : |x_i| ≤ (1/2) detectionWindowSize, |β_i| ≤ (1/2) detectionWindowAngle, (x_i, z_i) ∈ S},    (6)

where β_i = atan2(x_i, z_i). The detectionWindowAngle condition limits the number of samples used for the line fit. If there are fewer than two samples, no line fitting is performed and h_gnd is set to zero.

Apart from a height estimate, the line fit also provides an estimate of the slope angle of the terrain. However, we did not see the need to utilize terrain slope information for operations at lower cruise speed in a typical rural environment.

In addition to the height above ground, we calculate a minimum height h_min used for detecting terrain discontinuities during terrain following. The minimum height is given by

    h_min = min{z_j : |z_i − z_j| ≤ terrainPointVariation, i ≠ j, (x_i, z_i) ∈ S_g, (x_j, z_j) ∈ S_g}.    (7)

If there are fewer than two elements in S_g, h_min is set to zero. In Eq. (7), LIDAR readings are filtered by requiring at least two reflection points at a similar distance. The spatial filter prevents the terrain-following system from reacting to readings that are likely to be false-positive detections.

5. FLIGHT MODES

5.1. Mode Switching

The LAOA system utilizes a hybrid control scheme with five flight modes: hover, climb, pirouette descent, yaw, and waggle cruise. The mode switching is modeled by a state machine.
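Before turning to the flight modes, the per-scan detection quantities of Eqs. (2)-(7) can be sketched as follows. This is a minimal sketch under our own naming (`detection_quantities` and its parameters), not the authors' code:

```python
import math

def detection_quantities(points, window_size, far_obstacle_dist,
                         window_angle, point_variation):
    """Sketch of Eqs. (2)-(7): frontal obstacle distance d_f, obstacle
    distance below d_d, terrain height h_gnd (line-fit intercept), and
    the filtered minimum height h_min. `points` are 2D reflection
    points (x_i, z_i) in the leveled body frame."""
    # Eqs. (2)-(3): closest frontal obstacle in the detection window.
    sf = [x for x, z in points if abs(z) <= window_size / 2]
    d_f = min(sf, default=0.0)
    # Eqs. (4)-(5): closest obstacle below the helicopter.
    sd = [z for x, z in points if abs(x) <= far_obstacle_dist]
    d_d = min(sd, default=0.0)
    # Eq. (6): reflection points in the terrain detection window.
    sg = [(x, z) for x, z in points
          if abs(x) <= window_size / 2
          and abs(math.atan2(x, z)) <= window_angle / 2]
    if len(sg) < 2:
        return d_f, d_d, 0.0, 0.0
    # Least-squares fit of the line z = m*x + b; h_gnd is the intercept b.
    n = len(sg)
    sx = sum(x for x, _ in sg)
    sz = sum(z for _, z in sg)
    sxx = sum(x * x for x, _ in sg)
    sxz = sum(x * z for x, z in sg)
    denom = n * sxx - sx * sx
    h_gnd = (sz * sxx - sx * sxz) / denom if denom else 0.0
    # Eq. (7): h_min requires a second point at a similar distance,
    # which filters isolated (likely false-positive) readings.
    zs = [z for _, z in sg]
    cand = [zj for j, zj in enumerate(zs)
            if any(i != j and abs(zi - zj) <= point_variation
                   for i, zi in enumerate(zs))]
    h_min = min(cand, default=0.0)
    return d_f, d_d, h_gnd, h_min
```

For flat ground 10 m below the helicopter and a single frontal obstacle 5 m ahead, the sketch yields d_f = 5, d_d = 10, h_gnd = 10, and h_min = 10.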
The state diagram on the left in Figure 4 shows possible transitions between flight modes. The central flight mode is hover. The hover mode is an atomic state (a state that does not encapsulate other states), whereas the four nonstationary flight modes (shown as superstates) are flight modes with several nested states. To ensure smooth switching to the hover mode, nonstationary flight modes are only exited when the helicopter velocities are low and the attitude is normal.

⁵Above sloped terrain, the height value is larger than the distance of the helicopter to the closest terrain point. This is accommodated for by parameters of the terrain-following and avoidance functions listed in Table VI of the Appendix and the terrain discontinuity behavior described in Section 5.8.

The nonstationary flight modes discussed in this paper consist of acceleration, run, deceleration, and stabilization states as depicted in the state diagram on the right in Figure 4. The events brakeThreshold, velocityReached, closeToTarget, farFromTarget, and stableHover are sent from a concurrent state machine that analyzes the helicopter state in relation to the reference values of the current flight mode. The lowVelocity event is sent from a concurrent state machine that monitors the velocity of the helicopter. The queryVelocity event is used to request an analysis of the current helicopter velocity. This is necessary as the lowVelocity event could be sent while the state machine is not in a state reacting to the event. A hoverMode event aborts a nonstationary flight mode.

5.2. Controllers

The LAOA system requires an underlying external control system that tracks the longitudinal, lateral, and vertical helicopter velocity reference v^c_xyz = (v^c_x, v^c_y, v^c_z) in the leveled body frame of the helicopter (see Section 4) and the yaw angle reference ψ^c. We assume the controllers are decoupled and acceleration-limited.
The maximum linear and angular accelerations are an order of magnitude higher than those required for the linear and angular velocity changes commanded by the LAOA system. The superscript 'c' is used for references for the external control system. For other references, we use the superscripts 'f' or 'v' for fixed or variable values.

The LAOA system includes three decoupled SISO PI position controllers C_px(e), C_py(e), and C_pz(e), where e is the control error. The position controllers produce the references for the external velocity controllers. The integral term is usually only required for compensation of steady-state errors. We determined the gains of the position controllers empirically based on the critical points found through flight testing.

Height is either defined as height above ground for low-altitude flights or height above the takeoff point for flights beyond the safe detection range of the LIDAR system. The height above the takeoff point is measured with a barometric altimeter with the reference pressure set to the pressure at the takeoff point. The vertical position is regulated independently of the horizontal position. The terrain-following behavior emerges from regulating the height above ground during cruise flight.

5.3. Hover Mode

The hover mode requires a horizontal position reference p^f_NE in the earth-fixed NED frame and a height reference h^f.
Figure 4. Flight modes of the LAOA system (left) and typical states of a nonstationary flight mode (right).

Depending on the configuration, the height reference is either defined as height above ground (negative value) or height in the earth-fixed NED frame. The corresponding height estimates are h = −h_gnd (see Section 4.2) or, respectively, h = p_D.

The velocity references for the external controllers are calculated as follows:

    v^c_x = C_px(Δp_x),  v^c_y = C_py(Δp_y),  v^c_z = C_pz(h^f − h),    (8)

where Δp_x and Δp_y are the components of the position error vector given by

    [ Δp_x ]   [  cos ψ   sin ψ ]
    [ Δp_y ] = [ −sin ψ   cos ψ ] (p^f_NE − p_NE),    (9)

where p_NE is the estimated position of the helicopter in the earth-fixed NED frame. The yaw angle is fixed and controlled by the external helicopter control system (ψ^c = ψ^f). For yaw angle changes, we use the yaw mode described in Section 5.5.

5.4. Climb Mode

The climb mode is used for vertical height changes and requires a height reference h^f. It is a nonstationary flight mode that consists of the four main states introduced in Section 5.1. To reconfigure the control, a reconfigureControl event (see Figure 4) is sent to a concurrent state machine that models the interaction of the different controllers. The horizontal position control and the yaw angle control are identical to the hover mode. The vertical velocity references for the external controllers in the acceleration, run, and deceleration states are

    v^c_z = a(t − t_0),  v^c_z = v,  v^c_z = v − a(t − t_0),    (10)

where a is a fixed vertical acceleration (negative value), v = −verticalSpeed is a fixed vertical speed, and t_0 is the time when entering the corresponding state. While in a state, the velocity references are calculated and passed to the controller. The system transitions from the acceleration to the run state when reaching the desired vertical velocity.
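The hover-mode computation of Eqs. (8) and (9) can be sketched as follows. The names are ours, and simple proportional controllers stand in for the paper's PI position controllers:

```python
import math

def hover_velocity_refs(p_ref_ne, p_ne, yaw, h_ref, h, c_px, c_py, c_pz):
    """Sketch of Eqs. (8)-(9): rotate the NE position error into the
    leveled body frame and feed it to the position controllers, which
    are passed in as callables c_px, c_py, c_pz."""
    dn, de = p_ref_ne[0] - p_ne[0], p_ref_ne[1] - p_ne[1]
    dpx = math.cos(yaw) * dn + math.sin(yaw) * de   # Eq. (9), first row
    dpy = -math.sin(yaw) * dn + math.cos(yaw) * de  # Eq. (9), second row
    return c_px(dpx), c_py(dpy), c_pz(h_ref - h)    # Eq. (8)
```

With zero yaw, a position error of 2 m north and 4 m east and a height error of −2 m, proportional controllers with gain 0.5 yield the velocity references (1, 2, −1).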
When reaching a height that is close to the height reference, the system enters the deceleration state. The distance is mainly determined by the specified vertical deceleration of the helicopter. The system leaves the deceleration state when the velocity is sufficiently low to enable hover control. If sufficiently close to the height reference, the system stabilizes the hover using the desired height as the reference for hover control. Otherwise, it uses the current height as the reference and sends an error event. If the system receives a hoverMode event while it is in climb mode, the helicopter decelerates and the system transitions to hover mode.

5.5. Yaw Mode

The yaw mode is used for changing the yaw angle of the helicopter to a desired yaw angle ψ^f while the aircraft hovers. The position control is identical to the hover mode. The yaw angle change is achieved through an increase or decrease of the yaw angle reference ψ^c for the external yaw controller, depending on the direction of rotation. The direction of rotation is determined by the smaller angle difference between the start and the desired yaw angle of the two possible rotations. The flight mode has nested states similar to the climb
Figure 5. Safe scan area of the waggle cruise flight following a pirouette plotted for the parameters specified in Table VI of the Appendix (angles not to scale).

mode. The yaw angle references for the external yaw controller in the acceleration, run, and deceleration states are

    ψ^c = ±(1/2) α(t − t_0)² + ψ^c_0,
    ψ^c = ±ω(t − t_0) + ψ^c_0,
    ψ^c = ±ω(t − t_0) ∓ (1/2) α(t − t_0)² + ψ^c_0,    (11)

where α is a fixed angular acceleration, ω is a fixed angular velocity, t_0 is the time, and ψ^c_0 is the yaw angle reference when entering the corresponding state.

5.6. Pirouette Descent Mode

The pirouette descent mode in combination with the state machine described in Section 6.2 enables safe vertical descents to desired heights h^f close to the ground. The helicopter descends while rotating about an axis through the position reference point p^f_NE, which is parallel to the z-axis of the leveled body frame. Thus, the field of view of the LIDAR system is extended from a planar scan to a cylindrical scan. The pirouette is performed by changing the yaw angle reference with constant rate during the descent:

    ψ^c = ω_z(t − t_0) + ψ_0,    (12)

where ω_z = pirouetteYawRate is a fixed yaw rate, t_0 is the time, and ψ_0 is the yaw angle when entering the acceleration state. We did not include acceleration and deceleration states for yaw control in this flight mode for two reasons: (1) the external control system includes an angular acceleration limiter, and (2) accurate yaw angle control is not required for the pirouette descent. While performing pirouettes, the integral terms of the horizontal position controllers are set to zero. The flight mode has the same nested states as the climb mode, and thus we use the same state machine (flightModeA).
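The per-state references of Eq. (11) amount to a trapezoidal yaw-rate profile. In the paper, state transitions are driven by ESM events; the sketch below instead chains the three states by time, with our own names (`yaw_profile`, `t_run`), purely for illustration:

```python
def yaw_profile(t, alpha, omega, t_run, psi0, sign=1.0):
    """Sketch of Eq. (11): piecewise yaw reference over the
    acceleration, run, and deceleration states. t is the time since
    entering the acceleration state, t_run the duration of the run
    state, alpha/omega the fixed angular acceleration and velocity."""
    t_acc = omega / alpha                     # time to reach omega
    if t < t_acc:                             # acceleration state
        return sign * 0.5 * alpha * t * t + psi0
    psi_acc = sign * 0.5 * alpha * t_acc * t_acc + psi0
    if t < t_acc + t_run:                     # run state
        return sign * omega * (t - t_acc) + psi_acc
    psi_run = sign * omega * t_run + psi_acc
    td = t - t_acc - t_run                    # deceleration state
    return sign * (omega * td - 0.5 * alpha * td * td) + psi_run
```

With α = ω = 1 and a 2 s run state, the yaw reference reaches 0.5 rad at the end of acceleration, 2.5 rad at the end of the run state, and 3.0 rad when the deceleration completes at t = 4 s.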
The yaw control is reconfigured by sending the event specialMode before entering the superstate (see Figure 4). The reference velocities are calculated as in Eq. (10) with positive values for the vertical acceleration a and vertical speed v.

5.7. Waggle Cruise Mode

The LAOA system is waypoint-based and uses horizontal straight-line paths for flights to waypoints. If waypoints are at different heights, the height change is achieved through the climb mode as described earlier. The waggle cruise mode combines a straight-line path-following controller with a waggle motion generator for extending the field of view of the LIDAR system. The flight mode requires two references: a 2D waypoint position p^fwp_NE and a ground track angle ψ^f_g.

The waggle motion during forward flight extends the field of view of the LIDAR system from a planar scan to a corridor-shaped scan (see Figure 5 and Section 4). The flight mode has nested states similar to the climb mode (see Figure 4). The height reference is fixed. Height is estimated based on either distance measurements to the ground (Section 4.2) or barometric pressure.

The only difference between waggle cruise and a simple cruise is the yaw control. Both flight modes use the same state machine (the simple cruise mode is not shown in Figure 4 as it is not required for the LAOA system). The different configuration of the yaw control is realized by sending the specialMode event before entering the superstate.
Figure 6. Straight-line path following.

During simple cruise, the yaw angle reference is fixed. During waggle cruise, the yaw angle reference is given by

    ψ^c = ψ_w sin((2π/T_w)(t − t_0)) + ψ^f_g,    (13)

where ψ_w = maxWaggleYawAngle, T_w = wagglePeriodTime, and t_0 is the time when entering the state. Similar to the pirouette descent, there are no acceleration and deceleration states for the yaw control during the transition from and to hover mode. The maximum yaw rate and yaw acceleration during waggle cruise are determined by ψ_w and T_w. When choosing the two parameters, the limitations of the helicopter have to be taken into account.

The horizontal velocity references v^c_x and v^c_y during cruise flight are calculated as follows:

    v^c_x = v^v_x + C_px(c_x),    (14)
    v^c_y = v^v_y + C_py(c_y),    (15)

where v^v_x and v^v_y are the components of the path velocity vector v^v, and c_x and c_y are the components of the cross-track error vector c in the leveled body frame (see Figure 6). The two position controllers C_px and C_py are given in Eq. (8). The cross-track error vector c in the leveled body frame is given by

    [ c_x ]        [ −sin ψ_e ]
    [ c_y ] = c_yg [  cos ψ_e ],  where ψ_e = ψ^f_g − ψ,    (16)

and c_yg is the y-component of the cross-track error vector in the ground track frame given by

    c_yg = −Δp_N sin ψ_g + Δp_E cos ψ_g,  where (Δp_N, Δp_E)^T = p^fwp_NE − p_NE.    (17)

The velocity vector v^v in the leveled body frame is given by

    [ v^v_x ]       [ cos ψ_e ]
    [ v^v_y ] = v^v [ sin ψ_e ],    (18)

where v^v are the desired path speed values that are generated during the acceleration, run, and deceleration states of the nonstationary flight mode. The path speed values are calculated as in Eq. (10) with a = horizontalAcceleration and v = waggleCruiseSpeed (fixed values).

The velocity references for the external controllers for the experiments described in this paper are given by

    [ v^c_x ]   [ cos ψ_e   −sin ψ_e ] [ v^v  ]
    [ v^c_y ] = [ sin ψ_e    cos ψ_e ] [ v_yg ],    (19)

where v_yg = C_py(c_yg).
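The waggle yaw reference and the ground-track-frame velocity references of Eqs. (13), (17), and (19) can be sketched together. Names are ours, and a proportional controller stands in for the PI cross-track controller:

```python
import math

def waggle_cruise_refs(t, p_wp_ne, p_ne, psi, psi_g, vv, psi_w, t_w, c_py):
    """Sketch of Eqs. (13), (17), and (19): waggle yaw reference plus
    velocity references computed in the ground track frame and rotated
    into the leveled body frame."""
    # Eq. (13): sinusoidal waggle around the ground track angle.
    psi_c = psi_w * math.sin(2.0 * math.pi / t_w * t) + psi_g
    # Eq. (17): cross-track error in the ground track frame.
    dn, de = p_wp_ne[0] - p_ne[0], p_wp_ne[1] - p_ne[1]
    c_yg = -dn * math.sin(psi_g) + de * math.cos(psi_g)
    v_yg = c_py(c_yg)
    # Eq. (19): rotate (path speed, cross-track correction) into the
    # leveled body frame using psi_e = psi_g - psi.
    psi_e = psi_g - psi
    vcx = math.cos(psi_e) * vv - math.sin(psi_e) * v_yg
    vcy = math.sin(psi_e) * vv + math.cos(psi_e) * v_yg
    return psi_c, vcx, vcy
```

For a northbound ground track (ψ_g = ψ = 0), a 1 m cross-track offset east, a path speed of 2 m/s, and a gain-0.5 controller, the sketch yields the references (0, 2, 0.5) at t = 0.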
In this method, the velocity references are first calculated in the ground track frame and then transformed into the leveled body frame. In contrast, the first method [Eqs. (14) and (15)] allows us to have different gains for x and y position control. This makes sense if the external velocity controllers behave differently for forward and sideward flight, which is likely to be the case for single-rotor helicopters.

5.8. Terrain Following

The terrain-following behavior emerges from regulating the height above ground during cruise flight. Terrain following is activated and deactivated through the events lowAltitudeFlightOn and lowAltitudeFlightOff, which are sent to the height controller while executing the state machine for height changes described in Section 6.2. During terrain following, the height controller uses h = −h_gnd estimates from the terrain detection module (see Section 4.2).

In the case of detection of a terrain discontinuity caused by a vertical structure or something similar, a special behavior is activated: verticalOffset is added to the height observation h if the h_min value is less than discontinuityHeight. The offset is removed after the specified decay time verticalOffsetDecay. If another discontinuity is detected during the decay time, the decay time starts counting again. The maximum vertical velocities commanded by the vertical position controller must be limited to stay within the flight envelope of the helicopter. In particular, commanding a high descent velocity must be avoided as it could cause the helicopter to enter the vortex ring state. In our implementation, the vertical velocities are limited to the verticalSpeed value (see Table VI of the Appendix).

6. LOW-ALTITUDE FLIGHT

We define a low-altitude flight as a flight that is performed below typical treetop height in a rural environment. For safe operations close to terrain and obstacles, the helicopter must keep a minimal distance from objects.
The minimal distance is mainly limited by the characteristics of the aircraft's guidance and control system and the error of range measurements within the specified environmental conditions. The methods we propose have parameters to adapt the LAOA system to different helicopter systems and environments. The parameters of our implementation are provided in Table VI of the Appendix.

Journal of Field Robotics DOI 10.1002/rob

Figure 7. State diagram describing start, finish, and abort of a low-altitude flight. It is concurrent with the flight mode switching state machine (Figure 4). All state machines shown in the remainder of this section are encapsulated in state 2.

The methods described in this section are reactive, and state machines are used to model the behavior of the helicopter. The events of the state machines are defined in Table VII, and the variables used in the state diagrams are calculated according to Table VIII. All state machines modeling flights at low altitude are encapsulated in the lowAltitudeFlight superstate of the top-level state machine for low-altitude flight, as depicted in Figure 7. It is assumed the helicopter is in hover mode before executing and terminating encapsulated top-level state machines.

We developed two obstacle-avoidance strategies: a relatively simple strategy that is suitable for many rural areas with isolated obstacles such as single trees, and a more complex strategy that reliably guides the helicopter to a goal point in more complex environments. Apart from the assumptions mentioned in the previous sections, we assume that the goal point can be reached safely, i.e., there exists a path with sufficient width. If no such path exists, the helicopter tries to reach the goal point until a low fuel warning is sent, which aborts the low-altitude flight. In the following paragraphs, we consider configurations of obstacles with gaps smaller than the minimum path width required for the helicopter to pass through as a single obstacle. Both avoidance strategies ensure a safe distance to obstacles. The first strategy was developed primarily for testing the overall system, including obstacle detection and flight modes.
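A minimal sketch of the start/abort logic of such a top-level state machine (cf. Figure 7) might look as follows. The event names queryHover, hovering, and error follow the paper; the dispatch mechanism, the remaining event names, and the timeout value are assumptions.

```python
class LowAltitudeFlightSM:
    """Sketch of a top-level low-altitude flight state machine: query hover
    mode, enter the low-altitude superstate, and climb out on abort."""
    def __init__(self, send_event, query_timeout=5.0):
        self.send_event = send_event      # callback into the mode-switching state machine
        self.query_timeout = query_timeout  # assumed value
        self.state = "idle"
        self._deadline = None

    def start(self, now):
        # State 1: ask the mode-switching state machine whether the helicopter hovers
        self.send_event("queryHover")
        self.state = "awaitHover"
        self._deadline = now + self.query_timeout

    def handle(self, event, now):
        if self.state == "awaitHover":
            if event == "hovering":
                self.state = "lowAltitudeFlight"    # superstate (state 2)
            elif now >= self._deadline:             # timeout condition
                self.state = "final"
                self.send_event("error")
        elif self.state == "lowAltitudeFlight":
            if event in ("lowFuel", "abort"):       # abort path: decelerate, then climb
                self.state = "climbToSafeAltitude"
            elif event == "finished":               # normal termination in hover mode
                self.state = "climbToSafeAltitude"
```

Calling `handle` periodically with the current time lets the timeout fire even when no event arrives, which mirrors the timed transition out of state 1.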
In both strategies, the helicopter performs waggle cruise flights until it detects obstacles. If an obstacle is detected, the helicopter decelerates and switches to the hover mode. Then it scans the environment for obstacles and calculates an avoidance waypoint.

6.1. Top-level State Machine for Low-altitude Flight

The state diagram in Figure 7 describes the start, finish, and abort of a low-altitude flight. When entering the state machine, the system checks if the helicopter is in hover mode. It sends a queryHover event to the mode-switching state machine and waits in state 1 for the hovering event. If the mode-switching state machine does not reply after a specified time (timeout condition), a transition from state 1 to the final state is made and an error event is sent. Otherwise, the system enters the lowAltitudeFlight superstate (state 2).

A low-altitude flight is aborted when any of the events causing a transition to state 5 occurs. The events are sent either from a concurrent state machine that monitors the system or from state machines encapsulated in state 2. If the flight is aborted, the system goes through a deceleration state before performing a vertical climb to safeAltitude in state 4. A safe altitude is a height at which it is safe to fly without obstacle and terrain detection. In the normal case, the low-altitude flight state 2 is terminated in hover mode and a transition to state 3 is made, in which the helicopter climbs directly to a safe altitude.

6.2. Height Change

The helicopter must hover at a specified terrain-following height (heightRef=cruiseHeight) before either of the two avoidance strategies mentioned in the previous paragraph can be applied. Usually, the helicopter is at a different height above ground, and sometimes the height above ground is unknown. The method described in this section guides the helicopter safely to the terrain-following height.
It may also be applied to change the height after the helicopter has arrived at the goal point (heightRef=goalHeight). All height changes are performed following a vertical flight path.

The state diagram in Figure 8 describes the height change method. All descents are performed using the pirouette descent mode to make sure that the helicopter does not collide with any surrounding obstacles. The first pirouette is flown without height change, as initially no assumption about free space is made other than that it is safe to rotate. For ascents, the helicopter does not fly pirouettes, as we assume there are no overhead obstacles. If the helicopter is beyond the sensing range of the LIDAR system, it will perform a pirouette descent until it reaches a safe sensing range (safeLidarHeight event) to determine the height above ground. When it reaches the safe sensing range, terrain following is enabled and height control is switched to LIDAR readings (lowAltitudeFlightOn). The final pirouette is also flown without height change to ensure there is no obstacle in any direction of departure.

If during a descent an obstacle is encountered within a specified range, a pirouetteObstacle event is sent from a system monitor. This causes a transition from the lowAltitudeFlight state to the decelerating state in Figure 7 and thus aborts the height-change procedure.

Figure 8. State diagram describing the method for vertical height changes at low altitude.

6.3. Obstacle-avoidance Strategy 1

The basic idea of strategy 1 is to combine a motion-to-goal behavior with a motion-to-free-space behavior. The motion-to-goal behavior is the attempted direct flight toward the goal point. The motion-to-free-space behavior is exhibited when attempting to reach free space by first flying toward an avoidance waypoint (avoidanceWp) and then flying toward an assumed free-space waypoint (freeSpaceWp1) in the direction of the goal point. The helicopter is in free space when it reaches the free-space waypoint or when it reaches the start waypoint. The helicopter flies to all waypoints using the waggle cruise mode. The strategy is illustrated further in an example below.

The three state diagrams in Figure 9 describe strategy 1. The left state diagram in Figure 9 contains the states of the top-level state machine. The superstates representing the two main behaviors of the helicopter are states 3 and 4. The transition from the motion-to-goal to the motion-to-free-space behavior is made when the helicopter detects a frontal obstacle (farObstacle event).

The waggleCruise state machine is used in several state machines. It consists of a state for calculating the ground track angle, a state for initializing the bearingAngle variable needed in both strategies, a state for aligning the helicopter with the ground track, and a state for the actual flight to the specified waypoint.

The motion-to-free-space behavior is modeled by the freeSpaceFlight state machine. In strategy 1, the initial avoidance direction is predefined (startDirection). However, the direction will be changed after a specified number of unsuccessful attempts to avoid the obstacle (state 6).
When the direction is changed, the helicopter flies to the last start point before flying again toward the goal point. The specified number of attempts constrains the size of an obstacle that can be avoided.

An obstacle-avoidance flight using strategy 1 is illustrated in Figure 10. It shows a flight from a start point defined by the helicopter's initial position to a goal point with a predefined avoidance direction to the left. The helicopter detects an obstacle, flies to the left and toward the obstacle while keeping a safe distance from the obstacle until it reaches free space, and eventually it flies to the goal point. Flight test results of strategy 1, including flights beyond visual range, are given in Section 7.

Figure 9. State diagrams describing obstacle-avoidance strategy 1.

Figure 10. Illustration of obstacle-avoidance strategy 1 (startDirection=-1).

The first strategy will not succeed in guiding the helicopter to a specified goal point in environments that contain larger concave-shaped obstacles. In such environments, the helicopter could get trapped, a phenomenon often observed in reactive systems. The key parameter defining the admissible curvature of the obstacle shape is the distance from the point at which the helicopter detects the first obstacle to the goal point. The required minimum path width is mainly determined by the distance condition of the farObstacle event (see Table VII).

6.4. Obstacle-avoidance Strategy 2

This strategy was developed for rural areas with more complex-shaped and larger obstacles. Although still reactive and computationally simple, the proposed strategy reliably guides the helicopter to a goal point. It succeeds even in environments with concave-shaped obstacles as long as the boundary length of an obstacle is limited as defined below. The algorithm of the strategy is similar to the Bug2 algorithm. Our strategy is designed for the waggle cruise obstacle detection method and considers real-world constraints such as safety distances from obstacles, limited sensing range, limited accelerations, and uncertainties in obstacle location, state estimation, and control.
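The avoidance-waypoint geometry of strategy 1 (Figure 10) can be sketched as a single offset computation. This is a hypothetical reconstruction: the paper defines the waypoint via its Table VIII variables, so the choice of reference point and the angle convention here are assumptions.

```python
import math

def avoidance_waypoint(pos_ne, track_angle, avoidance_angle, distance, direction):
    """Offset the next waypoint by avoidanceAngle to the side given by
    startDirection (-1 = left, +1 = right) at avoidanceWpDistance,
    measured from the current NED position (north, east)."""
    heading = track_angle + direction * avoidance_angle
    return (pos_ne[0] + distance * math.cos(heading),
            pos_ne[1] + distance * math.sin(heading))
```

For a northbound track with `direction=-1` (left), the waypoint lands north-west of the helicopter, matching the leftward avoidance motion shown in Figure 10.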
Furthermore, it employs heuristics and utilizes assumptions about the environment to be more efficient. The Bug2 algorithm is a greedy algorithm that is complete. The algorithm of strategy 2 is not complete for the general case. However, for certain cases it is equivalent to Bug2, and it successfully terminated in all experiments we have conducted.

The state diagrams in Figures 11, 12, and 13 describe strategy 2. The basic idea is the same as in Bug2, i.e., combining a motion-to-goal behavior (state 3 in Figure 11) with a wall-following6 behavior (state 14 in Figure 11). The strategy is illustrated further in an example below. The key differences to Bug2 are the directionScan state (state 11 in Figure 11) for deciding in which direction to circumnavigate an obstacle and different conditions for when to abort the wall-following behavior.

The state diagram and the pseudocode in Figure 12 describe the method for deciding the wall-following direction. The basic idea is to rotate from the left to the right while hovering in front of a detected obstacle and to decide on the direction that offers more free space.

The wall-following method utilizes the obstacle-detection methods and flight modes introduced in Sections 4 and 5. The two state diagrams in Figure 13 describe the method. The basic idea is to find a distant point on the boundary of the obstacle in the wall-following direction, rotate the helicopter to a certain angle relative to that point away from the wall, fly a certain distance avoidanceWpDistance in that direction, and repeat. To find a distant point on the boundary, the helicopter first rotates toward the wall7 until the wallCatch event is sent and then rotates away from the wall until the wallRelease event is sent. The distant point is the point on the boundary at which the helicopter is pointing when the wallRelease event is sent.
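The direction-scan decision described above could be sketched as follows. The paper specifies it as a state diagram with pseudocode (Figure 12), so the free-space summing heuristic used here is an assumption standing in for that pseudocode.

```python
def choose_wall_following_direction(scan_ranges):
    """Sketch of the directionScan decision: given range readings collected
    while yawing from left to right in front of the obstacle, follow the
    wall in the direction that offers more free space.
    Returns -1 for left, +1 for right (matching startDirection=-1 = left)."""
    mid = len(scan_ranges) // 2
    left_space = sum(scan_ranges[:mid])   # readings taken while pointing left
    right_space = sum(scan_ranges[mid:])  # readings taken while pointing right
    return -1 if left_space >= right_space else +1
```

Summing ranges is a crude free-space measure; a real implementation would at least clamp readings to the usable sensing range before comparing the two halves.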
The avoidance behavior of the helicopter may differ depending on its yaw angle before commencing a rotate-toward-wall behavior. Before the helicopter enters the corresponding state (state 10 in Figure 13), it rotates to the startAngle (state 3). The helicopter then either points in the direction of the last ground track (startAngle = bearingAngle) or the bearing to the last detected obstacle plus or minus an offset angle, depending on the current wall-following direction (startAngle = obstacleBearing + avDirection · offsetAngle). The latter should prevent the helicopter from missing an obstacle that was detected during waggle cruise flight while searching for it while hovering. However, the offset angle method was developed at a later stage and has not been flight-tested.

The key events that make our avoidance strategy different from Bug2 are closeToGoal, outside, progress, maxAttempts, and continue. All events abort the wall following (state 14 in Figure 11). The progress and closeToGoal events are similar to the events aborting the wall following in Bug2. Examples of scenarios in which the key events occur can be found in Section 6.5. There are some important differences in the conditions that must hold to produce the progress event (see Table VII): at least one avoidance waypoint must have been generated, it is sufficient to be close to the line to the goal point, and a minimum progress distance is required. The line to the goal point is not, as in Bug2, the line through the start point and the goal point (m-line) but rather the line through the progressWp point and the goal point. The progressWp point is initially the start point but might be changed to a different point during the flight if the helicopter gets closer to the goal point after a specified number of attempts. The minimum progress is evaluated by comparing the distance from the current position to the goal point with the distance from the point at which the wall following started (wallFollowingStart point) to the goal point.

6Here, a wall is the boundary of an object in a 2D plane.
7A rotation toward the wall means rotating clockwise if an obstacle is circumnavigated clockwise and rotating anticlockwise otherwise.

Figure 11. State diagram describing obstacle-avoidance strategy 2 (top-level state machine).

Figure 12. State diagram and pseudocode describing the method for deciding the wall-following direction.

Figure 13. State diagram describing the wall-following method.

Strategy 2 generates a behavior similar to the Bug2 algorithm if (1) the geometry of obstacles in an environment is such that during the execution of the state machine, the wall following will only be aborted by the progress or closeToGoal events; (2) no height changes occur that change the perceived geometry of obstacles; and (3) uncertainties in perception and control are neglected. However, our strategy does not check if a goal point is reachable. We assume a path exists.

Strategy 2 uses the corridorObstacle event for obstacle detection instead of the farObstacle event used in strategy 1. The corridorObstacle event is sent when obstacles are detected inside a corridor of a specified length and width, which is aligned with the flight path (see Figure 5 in Section 5.7).

An obstacle-avoidance flight using strategy 2 is illustrated in Figure 14. The scenario is the same as in Figure 10 for strategy 1: a flight from a start point to a goal point with one obstacle in between. In strategy 2, the helicopter detects the obstacle, performs a scan to decide on a wall-following direction, follows the boundary of the obstacle at a safe distance until it gets close to the original path, aborts the wall following, and eventually flies to the goal point.
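The three conditions for the progress event described above can be sketched as one predicate. Parameter names are assumptions; the authoritative definitions are in the paper's Table VII.

```python
import math

def progress_event(pos_ne, goal_ne, progress_wp_ne, wall_following_start_ne,
                   n_avoidance_wps, path_tolerance, min_progress):
    """Sketch of the progress-event conditions: at least one avoidance
    waypoint generated, helicopter close to the line through progressWp and
    the goal, and a minimum reduction in goal distance since wall following
    started."""
    if n_avoidance_wps < 1:
        return False
    # Perpendicular distance from the current position to the
    # progressWp-goal line (the strategy's replacement for Bug2's m-line)
    gx = goal_ne[0] - progress_wp_ne[0]
    gy = goal_ne[1] - progress_wp_ne[1]
    px = pos_ne[0] - progress_wp_ne[0]
    py = pos_ne[1] - progress_wp_ne[1]
    cross_track = abs(gx * py - gy * px) / math.hypot(gx, gy)
    if cross_track > path_tolerance:
        return False
    # Minimum progress: goal distance must have shrunk since wall following began
    d_now = math.hypot(goal_ne[0] - pos_ne[0], goal_ne[1] - pos_ne[1])
    d_start = math.hypot(goal_ne[0] - wall_following_start_ne[0],
                         goal_ne[1] - wall_following_start_ne[1])
    return d_start - d_now >= min_progress
```

The minimum-progress term is what prevents the event from firing at point B in the border-crossing scenario of Figure 15(h), where the two goal distances are identical.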
Flight-test results using strategy 2 are provided in Section 7.

6.5. Obstacle-avoidance Scenarios

In this section, we illustrate the behavior of the helicopter for strategy 2 in special scenarios. All scenarios were encountered during flight tests. The scenarios also demonstrate the application of the events aborting the wall-following behavior. Figure 15 depicts nine different scenarios. In the drawings, a line with an arrow represents the approximate path of the helicopter while exhibiting either a motion-to-goal or a wall-following behavior. When the helicopter changes its behavior, a new line is drawn. If the arrow of a line is not filled, the complete path is not shown.

Figure 14. Illustration of obstacle-avoidance strategy 2.

Figure 15(a) illustrates a typical cul-de-sac scenario. This is an example in which strategy 1 would fail. When strategy 2 is applied, the helicopter first flies toward the goal point, then follows the wall until the progress event is sent (i.e., it gets close to the line from the start to the goal point). Finally, it flies to the goal point.

Figure 15(b) demonstrates the application of the closeToGoal event. The event is sent at point A. The event is similar to the event in Bug2 when a goal point is reached during wall following. However, the closeToGoal event allows for uncertainty in helicopter position, and it makes the strategy more efficient. Without the event, the helicopter would continue following the wall and fly past the goal point.

Figure 15(c) illustrates the situation in which the helicopter follows a wall until the maxAttempts event is sent at point A. An evaluation of the situation shows that progress has been made during the wall following (the wallFollowingProgress event is sent, as the right dotted line is shorter than the left dotted line), hence the helicopter continues following the wall.
Figure 15(d) illustrates the situation in which the helicopter follows a wall until the maxAttempts event is sent at point A and no progress has been made during wall following. The helicopter then continues flying toward the goal point. When detecting an obstacle at point B, it follows the wall in the other direction.

Figure 15(e) shows the behavior of the helicopter when the contact to the wall is temporarily lost during wall following. This usually happens at sharp convex corners of an obstacle. Losing the contact to the wall means the system enters state 5 in Figure 13. In Figure 15(e), the helicopter loses contact at point A, but when flying toward freeSpaceWp2, it detects the wall again at point B and continues following the wall.

Figure 15(f) shows the behavior of the helicopter when the contact to the wall is lost during wall following and not regained. In this scenario, the contact is lost at point A, the helicopter reaches freeSpaceWp2 at point B, and the continue event is sent. The wall following is aborted and the helicopter flies to the goal point.

Figure 15(g) illustrates the case of a false-positive detection at point A. The system fails to detect the obstacle during the rotation in state 10 in Figure 13 and transitions to the final state instead of state 5, as the wall-following behavior has not been exhibited yet. Leaving the wall-following state means that the helicopter continues flying to the goal point.

Figure 15(h) shows the behavior of the helicopter when reaching the border of the search area. The search area in which the helicopter can operate while trying to reach the goal point is defined as a corridor along the line from the start point to the goal point. When crossing the border at point A, the outside event is sent. The two dashed lines near point A illustrate the hysteresis condition, which is necessary to prevent repeatedly sending the outside event.
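The hysteresis condition for the outside event can be sketched as follows; the corridor half-width and the hysteresis band are assumed values, not the paper's.

```python
class OutsideMonitor:
    """Sketch of the search-area border check with hysteresis (Figure 15(h)):
    the outside event is sent once on crossing the border, and is re-armed
    only after the helicopter re-enters by a hysteresis margin."""
    def __init__(self, half_width=50.0, hysteresis=5.0):
        self.half_width = half_width    # assumed corridor half-width
        self.hysteresis = hysteresis    # assumed hysteresis band
        self.outside = False

    def update(self, cross_track_abs):
        if not self.outside and cross_track_abs > self.half_width:
            self.outside = True
            return "outside"            # sent once when crossing the border
        if self.outside and cross_track_abs < self.half_width - self.hysteresis:
            self.outside = False        # must re-enter past the inner band
        return None
```

Without the inner band, measurement noise near the border would toggle the event on every update, which is exactly what the two dashed lines in Figure 15(h) indicate the implementation avoids.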
When crossing the border, the helicopter stops, turns around, and follows the wall in the other direction. At point B, the progress event is not sent, as the distance from the point at which the helicopter first detected the obstacle to the goal point is identical to the distance from point B to the goal point, and a minimum difference is required to cause the event.

Figure 15(i) illustrates the case in which the helicopter follows a wall with a gap that is only detected from one side while circumnavigating an obstacle, and the helicopter finds a path through the gap. In this situation, it could happen that the helicopter would not stop encircling the obstacle, as the progress event is never sent. However, the maxAttempts event stops the behavior at point A and the helicopter flies to the goal point.

Figure 15. Obstacle avoidance scenarios: (a) cul-de-sac, (b) close to goal, (c) wall following progress, (d) no wall following progress, (e) lost wall contact − obstacle, (f) lost wall contact − no obstacle, (g) false positive, (h) outside search area, (i) inconsistent gap sensing.

6.6. Approaching Vertical Structures

For many inspection tasks, it is required to fly toward an approximately vertical structure and stop at a close but safe distance to collect frontal images. This can be easily achieved with components of the LAOA system. We used the following method for the structure inspection flights described in Section 7: The operator defines a target point and an approach point in the earth-fixed NED frame. The target point is a point of the structure. The line between the approach point and the target point defines the approach path and thus the direction from which the structure should be inspected. Given that the helicopter hovers at the approach point, it is then commanded to fly to the target point using the waggleCruise state machine (Figure 9). The hoverMode event must be sent once the helicopter reaches a desired distance to the structure based on the df value (Section 4.1). The farObstacle event can be used for the farObstacleDistance condition, or the corridorObstacle event for the corridorLength condition (see Table VII). After sending the hoverMode event, the helicopter will decelerate and hover. We assume there is no obstacle between the approach point and the target point except for the structure itself. The approach can be performed at any specified height.
Height changes are possible either at the approach point or at the point where the helicopter stops (inspection point), using the height-change method described in Section 6.2. However, it should be taken into account that descents typically require more free space than ascents because of the higher control error of the pirouette descent flight mode. Therefore, it is often necessary to descend at the approach point. If there is not sufficient space there either, a descent must be conducted at a third point (descent point). The helicopter can then be flown to the approach point using strategy 1 or 2, as described in Sections 6.3 and 6.4. Once the helicopter hovers at the inspection point, it starts capturing images. At the same time, it yaws to the left and right to increase the field of view of the inspection camera.

7. EXPERIMENTAL RESULTS

In this section, we present experimental results of the LAOA system implemented on one of CSIRO's unmanned helicopters (see Figure 1 in the introduction). Technical specifications of the base system components can be found in Table V of the Appendix. All experiments were conducted in unstructured outdoor environments without the use of maps. Flights were conducted at two sites: an industrial site in a natural bushland setting in Brisbane and a farm in rural Queensland, Australia. The first site is the QCAT flight-test area. It is about 180 m × 130 m in size and contains several natural and man-made obstacles such as trees of various sizes, bushes, a microwave tower, fences, two sheds, two vehicles, and other small obstacles. The second site is the Burrandowan flight-test area. It is a typical rural environment of more than 200 ha and includes areas with rough terrain and varying slope. All flights were conducted in compliance with the Australian regulations for unmanned aircraft systems.

We tested our LIDAR system in both environments under different ambient light conditions. The two critical parameters of the LIDAR system, safeLidarHeight and safeObstacleScanRange, are provided in Table VI of the Appendix.

All obstacle avoidance flights were conducted 10 m above ground with terrain following enabled. In most experiments, we flew the helicopter manually to a descent point. After switching to autonomous flight, it was commanded to descend to terrain-following height using the height-change method described in Section 6.2 and then to conduct the experiment as specified in a state machine. In more complex missions, such as the one described in Section 7.6, the low-altitude flight was part of a flight plan with several predefined waypoints that was executed by a waypoint sequencer.
The two most important experimental results are the system's performance with regard to safe and reliable autonomy. Safe autonomy is the ability of the LAOA system to perform a low-altitude flight without human interaction within its specified limits without causing damage to the environment or the helicopter. Safe autonomy does not imply that all specified goal points are reached. The system may abort a low-altitude flight for safety reasons. Reliable autonomy is the ability of the system to reach specified goal points.

Table I shows that despite performing a significant number of runs in different scenarios, we did not encounter a failure. This demonstrates the safety of the system. The table includes flights conducted during and after development of the system. The table does not contain flights where assumptions of the task specification were violated.

Table I. Safe autonomy of the LAOA system.

  Method                Runs   Scenarios  Flight time (h)  Failures(a)
  Terrain following     73(b)     11          11.4             0
  Avoidance strategy 1  27(c)      7           2.8             0
  Avoidance strategy 2  23(c)      6           7.1             0

  (a) A failure is when damage occurs during a run or a backup pilot has to take over to prevent damage.
  (b) The helicopter followed more than 14 km of terrain at 10 m height above ground.
  (c) The helicopter encounters at least one obstacle during waggle cruise flight with terrain following from a start point to a goal point.

Table II. Reliable autonomy of the LAOA system.

  Method                Missions  Scenarios  Successes(a)
  Avoidance strategy 1     20         7          20
  Avoidance strategy 2     17         6          17

  (a) A mission is successful if the helicopter autonomously reaches all specified low-altitude goal points.

The task specification of our system includes the following critical assumptions (see Tables V and VI of the Appendix): the helicopter navigation system operates according to its specification, and the horizontal and vertical wind speed is within the specified limits. Failures that occurred because of a violation of a critical assumption are described further below.
Table II contains results of the deployment of the system in several missions8 using the two obstacle-avoidance strategies described in Section 6. All missions were successfully executed. It should be mentioned that we conducted many more experiments during the development of the avoidance strategies and that there were cases in which a specified goal point was not reached. These cases were thoroughly analyzed and the system was modified accordingly. The missions included in Table II, however, were executed after the development was completed.

During the development of the LAOA system, we had three cases in which a backup pilot had to take over control. In the first case, the GPS of the navigation system failed, possibly due to radio interference with a microwave tower. In the second case, the control system could not cope with a strong wind gust while the helicopter was close to a tree. In the last case, the helicopter was pushed toward the ground by a strong downdraft. In all three cases, a critical design assumption was violated. The risk of failure during a BVR flight was decreased by increasing the safety distance to obstacles and by avoiding flights in potentially bad weather and in areas with GPS problems.

8The difference between a run and a mission is that a run is a segment of a flight related to a specific experiment, whereas a mission is a flight to accomplish a specified task. A mission may include several runs.

7.1. Control Errors

The parameters of the terrain-following and avoidance functions listed in Table VI of the Appendix depend, among other things, on the control errors. Table III shows control errors for the different flight modes used in the LAOA system. The errors were estimated empirically from flight data of the CSIRO helicopter. The errors depend strongly on the underlying external control system. When estimating control errors empirically, it is important to collect flight data in representative environmental conditions.

Table III. Empirical control errors(a) of flight modes during low-altitude flight.(b)

  Mode               h95 (m)  p95 (m)   ψ95 (deg)  v95/ME (m/s)
  hover                0.8      1.8         9
  waggle cruise        0.8      1.6(c)                0.7/0.1
  pirouette descent             4.5                   0.4/0.0
  climb                         2.4                   0.3/0.0
  yaw                  1.0                 2.9

  (a) x95 = 95th percentile of absolute error, xME = mean error, h = height error, p = horizontal position error, ψ = yaw error, v = horizontal/vertical velocity error.
  (b) 10 m height above ground; 4–7 m/s wind speed at ground station location.
  (c) Cross-track error.

Figure 16 shows the tracking performance for the yaw angle during pirouette descent and waggle cruise flight. Accurate yaw angle tracking is not important for obstacle detection as long as the specified scan area is covered. We conducted some longer pirouette descents to see if we would encounter problems because of the sustained rotation of the helicopter. We tested pirouette descents with a height change of more than 30 m without noticing any problems.

7.2. Terrain Following

Terrain following is a key functionality of the LAOA system.
Hence, we conducted many experiments to investigate if the simple method we propose is adequate. The flights were conducted with and without waggle motion. We did not notice a deterioration in performance with waggle motion. Terrain following was flight-tested for cruise speeds up to 2 m/s and down to 5 m height above ground in hilly terrain. However, due to the frontal sensing range limitation of our LIDAR system, we limited the cruise speed for obstacle-avoidance flights to 1 m/s. The height error in Table III for the waggle cruise mode with terrain following is estimated from flights with 1 m/s cruise speed.

Figure 17 shows the path of two terrain-following flights conducted at the QCAT site: a flight around the compound (descent point to end point) without waggle motion at 2 m/s cruise speed and 5 m height above ground, and a flight across the compound (point 1 to point 2) with waggle motion at 1 m/s cruise speed and 10 m height above ground. In the first flight, the helicopter was commanded to fly to a series of waypoints defining an obstacle-free flight path of about 450 m around the compound. The corresponding altitude plot is shown in Figure 18. The plot contains three different heights: the height above the takeoff point (−pD), estimated from barometric pressure; the height above ground, which is the hgnd value (it does not show terrain discontinuities as it is the intercept of the line fit described in Section 4.2); and the terrain height, estimated by subtracting the height above ground from the height above the takeoff point.

The helicopter was manually flown to the descent point at a height of approximately 22 m above the takeoff point. At that height, no LIDAR-based height estimates were available. After switching to autonomous flight, the height-change state machine (Figure 8) was used to perform a descent to the terrain-following height.
The height control was changed from pressure-based height to LIDAR-based height, and the terrain following was enabled, when the lowAltitudeFlightOn event was sent. For the waypoint flight, we used a waypoint sequencer that utilized the simple cruise mode (see Section 5.7). During 370 s of terrain following, the helicopter maintained the specified clearance from the ground with approximately 1 m error in height regulation.

Figure 16. Yaw angle tracking during pirouette descent (left) and waggle cruise flight (right).

Journal of Field Robotics DOI 10.1002/rob

Figure 19 shows the slope
angle computed by the terrain-detection system. Slope information has not been used in the current implementation but might be useful for flights at higher speed.

Figure 17. Two terrain-following flights at the QCAT site (aerial imagery of the QCAT site copyright by NearMap Pty Ltd. and used with permission): flight around compound (descent point to end point) and flight across compound (point 1 to point 2). At point A, the LAOA system detected a terrain discontinuity.

Figure 18. Altitude plot of the terrain-following flight around the compound.

Figure 19. Estimated terrain slope angle during the terrain-following flight around the compound.

Figure 20. Altitude plot of the terrain-following flight across the compound.

The performance of the height-offset method described in Section 5.8 is demonstrated with the terrain-following flight across the compound. Figure 17 only shows the part of the flight related to the crossing of the compound. The helicopter hovered at low altitude at the start of the run (point 1) and the end of the run (point 2). During the flight, many terrain discontinuities, such as the fence and the roof, were encountered. The altitude plot in Figure 20 shows when the height offset was added to keep a safe distance to terrain obstacles. The first discontinuity was detected at time A in the altitude plot, or point A in the map, just before the fence of the compound. The offset was removed at time B in the altitude plot, which corresponds to point 2 in the map. Point 2 was above the fence on the other side of the compound. We let the helicopter hover for an extended time at point 2 to demonstrate the behavior of the helicopter descending to the terrain-following height, detecting the terrain discontinuity (fence), and again applying the height offset at time C.
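The offset bookkeeping just described can be pictured with a small sketch. This is our simplification, not the paper's implementation; it only uses the verticalOffset (3 m) and verticalOffsetDecay (16 s) parameters from Table VI of the Appendix together with a hypothetical time-since-detection input:

```python
VERTICAL_OFFSET = 3.0         # m, verticalOffset in Table VI
VERTICAL_OFFSET_DECAY = 16.0  # s, verticalOffsetDecay in Table VI

def height_reference(cruise_height, time_since_discontinuity):
    """Simplified height-offset bookkeeping: after a terrain discontinuity
    is detected, the height reference is raised by VERTICAL_OFFSET; once no
    discontinuity has been seen for VERTICAL_OFFSET_DECAY seconds, the
    offset is removed again. None means no discontinuity was ever seen."""
    if time_since_discontinuity is None:
        return cruise_height
    if time_since_discontinuity < VERTICAL_OFFSET_DECAY:
        return cruise_height + VERTICAL_OFFSET
    return cruise_height

# With the 10 m cruise height of Table VI: offset applied at detection (A),
# removed again after the decay time (B).
assert height_reference(10.0, 2.0) == 13.0
assert height_reference(10.0, 20.0) == 10.0
```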
During the whole flight, the helicopter kept a safe distance to terrain obstacles and the system did not abort the low-altitude flight.

In a few cases of testing the LAOA system, the system aborted a low-altitude flight and climbed to a safe altitude because the closeObstacle event was sent (see Figure 7). This mostly occurred while flying over complex terrain obstacles such as low roofs, fences, and antenna poles inside the compound at the QCAT site. However, in most cases the height-offset method prevented the helicopter from getting too close to terrain obstacles and from aborting the low-altitude flight.

7.3. Wall Following

The wall-following behavior is essential for avoidance strategy 2. To demonstrate the performance of the wall-following method described in Figure 13, we present a flight that was conducted during the development of avoidance strategy 2. The flight path is shown in Figure 21. The helicopter was commanded to fly to waypoints 1 and 2 at the QCAT site. While trying to reach the waypoints, the helicopter exhibited a long wall-following behavior along the boundary of a forest and groups of trees without sufficient clearance to fly in between. Again, the height-change state machine was used to descend to the terrain-following height. After detecting the first obstacle, the helicopter stayed in the wall-following
state 14 of the state machine of strategy 2 (Figure 11) for most of the flight. Apart from the flight path, Figure 21 also shows the obstacle points that correspond to the distances df of the closest frontal obstacles (Section 4.1). As can be seen in the figure, the helicopter keeps a safe distance from obstacles during wall following. With the parameters used in our experiments (see Table VI of the Appendix), the flight time per generated avoidance waypoint is approximately 1 min and the distance between two avoidance waypoints is approximately 12 m.

Figure 21. Wall following flight at the QCAT site.

7.4. Avoidance Strategy 1

Avoidance strategy 1 is quick to implement and useful for testing obstacle detection and flight modes. However, strategy 2 outperforms strategy 1 in finding a path in complex scenarios. Hence, we will only present results of two obstacle-avoidance flights using strategy 1. The flights are interesting as they were part of two complex infrastructure inspection missions that were executed beyond visual range without a backup pilot at the Burrandowan site. The complete missions are described in Section 7.6; here we focus on the obstacle-avoidance part. The paths of the two obstacle-avoidance flights are shown in the top left corners of Figures 23(a) and 23(b).

In both missions, the helicopter performed a pirouette descent at the descent point (start point) and was commanded to fly to an approach point (goal point). Between the two points was a big tree. In the first mission, the predefined avoidance direction was to the right; in the second mission it was to the left. During the first low-altitude flight, the LAOA system generated five waypoints to avoid the big tree.
During the second low-altitude flight, more waypoints were generated as the initial avoidance direction was to the left and there was no safe path to circumnavigate the tree clockwise. The helicopter returned to the descent point after several attempts, changed the avoidance direction, and succeeded in circumnavigating the tree anticlockwise, as it did in the first flight.

The implementation of strategy 1 used for the two flights was slightly different from the final version: the helicopter did not stop at freeSpaceWp1 points, and there was an implementation fault in the processing of short LIDAR range measurements when the helicopter was in obstacle-free space. The latter caused the generation of one unnecessary avoidance waypoint during the first flight before proceeding to the approach point [Figure 23(a)]. The problem has since been fixed. The switching to the hover mode at freeSpaceWp1 points was introduced after simplifying a state machine in the current implementation.

7.5. Avoidance Strategy 2

We put significant effort into the development and testing of avoidance strategy 2. Since completing the development, we have not encountered a situation in which the LAOA system has not reached a goal point. In the following, we present six missions executed after completing the development. The flight paths are shown in Figure 22.

A typical avoidance flight with a single obstacle in the flight path is shown in Figure 22(a). When detecting the tree between the start and the goal point, the LAOA system decided to fly to the left using the method described in Figure 12 in Section 6. From the helicopter's perspective, the gap between the compound on the right and the tree was not big enough, and it detected more free space on the left. The system circumnavigated the tree clockwise by generating five avoidance waypoints before the wall following was aborted by the closeToGoal event.

Figure 22(b) shows a simple inspection mission.
The task was to take aerial photos of a ground object with a digital camera mounted underneath the helicopter. The ground object was defined by its position in the earth-fixed NED frame (inspection point). The mission was started at high altitude and included a pirouette descent at the descent point. The helicopter then had to fly from the descent point (start point) to the inspection point (first goal point) and return to the descent point (second goal point). As in the previous scenario, the helicopter circumnavigated the tree clockwise; however, on the return flight it found a path between the sheds in the compound and the tree. The wall following on the return flight was aborted by the progress event. The height variation of the obstacles inside the compound was
Figure 22. Six LAOA missions flown within visual range at the QCAT site using strategy 2: (a) tree avoidance, (b) inspection of a ground object, (c) inspection of microwave tower, (d) inspection of microwave tower with different start point, (e) inspection of several ground objects, (f) inspection with unreachable inspection point.
large. It ranged from the height of a fence to the height of a microwave tower. Depending on from where the helicopter approached the compound, the free space was perceived differently, and sometimes obstacles were perceived as terrain (terrain obstacles). The return flight also demonstrates the interaction of the horizontal obstacle avoidance and the terrain-following behavior. When approaching the narrow gap between the tree and the compound, the helicopter climbed when detecting the fence and then had enough free space to fly to the left to avoid the tree.

Figure 22(c) shows another inspection mission. This time the task was to take frontal photos of a microwave tower [Figure 26(b)] at close range using the approach method described in Section 6.6. For this task, a digital camera was mounted at the front of the helicopter (see Figure 1). The start point was again the descent point and the goal point was the approach point. The location of the microwave tower was defined by the target point. The point where the helicopter stopped to take photos was the inspection point. In this mission, the helicopter successfully avoided several trees and parts of the compound.

We conducted the tower inspection several times with different descent points. A second example is shown in Figure 22(d). In this mission, we started west of the previous descent point. As there was not enough space for a pirouette descent, and knowing there was enough space for a simple descent (no rotation), we used the simple descent to enable the terrain following. This time the helicopter avoided the tree and the compound to the right to get to the approach point.

Figure 22(e) shows the flight path of the most complex low-altitude flight we have conducted. The task was to take aerial photos of five ground objects. The mission started again with a pirouette descent.
On its way to the first inspection point, the helicopter had to avoid two trees. The second inspection point could be reached directly. Although the helicopter stopped on the way to the third inspection point as it detected a tree in the west, it then found enough space to continue the flight to the inspection point directly. This demonstrates how uncertainties can change a behavior. The fourth inspection point could be reached by flying north of the tree that was on the direct path. To reach the last inspection point, the helicopter deviated far from the direct path as it did not find a path east of the compound. It flew between the tree north of the compound and the sheds in the compound and followed the boundary of the tree clockwise. The clockwise wall-following behavior around the tree was aborted by the outside event and the wall-following direction changed. The helicopter followed the tree back to the compound and found a path to the fifth inspection point west of the compound.

The mission depicted in Figure 22(f) demonstrates what happens if the LAOA system is commanded to fly to a goal point that is unreachable. In this mission, the third inspection point was behind two trees and the gap between the two was too narrow for the helicopter to safely fly through. The helicopter successfully reached the first two inspection points; however, it exhibited an oscillatory behavior in the region of the third inspection point. It tried to fly in between the trees to reach the point, but it always detected a tree inside the scan corridor and continued with a wall-following behavior along one of the trees. It changed the wall-following direction when either an outside or a noWallFollowingProgress event was sent. We aborted the flight after a couple of iterations. In a mission without a backup pilot, the lowFuel event would have aborted the low-altitude flight (Figure 7).
During the development of the avoidance strategies, we noticed that reactive behaviors exhibited in real-world scenarios were often different from what we expected. Reactive methods that were designed based on idealized models of the world and tested in simulation often did not perform satisfactorily in our real-world experiments and had to be modified accordingly.

7.6. Missions Beyond Visual Range

The LAOA system was demonstrated during the final flight trials of the Smart Skies Project, which took place at the Burrandowan site. The Smart Skies Project was a research program exploring the research and development of future technologies that support the efficient utilization of airspace by both manned and unmanned aircraft (Clothier et al., 2011). In the framework of the project, the CSIRO UAS team was engaged in two main areas of research: dependable autonomous flight of unmanned helicopters in controlled airspace, and dependable autonomous flight of unmanned helicopters at low altitude over unknown terrain with static obstacles. A high level of dependability was required to permit flights beyond visual range without a backup pilot. The new technology was envisaged to enable autonomous infrastructure inspection missions with unmanned helicopters.

The objective of the final flight trials was to demonstrate the integration of all components that had been developed in the project. The flight trials involved several aircraft: (1) the CSIRO helicopter described in this paper, (2) an autonomous unmanned fixed-wing aircraft, (3) a manned light aircraft equipped with an autopilot for semiautonomous flights, and (4) a number of simulated manned and unmanned aircraft. The task for the CSIRO helicopter was to take frontal photos of a windmill on the Burrandowan farm for inspection. The flights to and from the inspection area were conducted in airspace shared with the other aircraft.
The airspace was controlled by a centralized automated airspace control system (ADAC) developed by our project partner Boeing Research and Technology USA.
Figure 23. The two windmill inspection missions flown beyond visual range at the Burrandowan site using strategy 1 (satellite imagery of the Burrandowan site copyright by Google Inc.): (a) initial avoidance direction to the right, (b) initial avoidance direction to the left.

Figure 24. Altitude plot of the windmill inspection flight shown in Figure 23(b).

Figure 25. The CSIRO helicopter approaching the windmill at the Burrandowan site. The helicopter successfully avoids the big tree in the left image, which is in between the descent point and the windmill (see also Figure 23).

We executed the inspection mission twice, as shown in Figures 23(a) and 23(b). The windmill (target point) was located about 1.4 km from the takeoff point. The inspection missions started at the mission start point and finished at the mission end point. The missions were executed beyond visual range of the operator at the ground station location and without a backup pilot. The operator could terminate a flight, but there were no means to fly the aircraft through the ground station. The flights to and from the inspection area [zoomed-in area in Figures 23(a) and 23(b)] were conducted at high altitude without static obstacles, following a predefined flight plan. Flights at high altitude were
conducted with 5 m/s ground speed and without waggle motion. Other aircraft were avoided by following instructions the helicopter received during the flight from the airspace control system. Once the helicopter descended into the inspection area at the descent point, flights were conducted at low altitude without other air traffic.

Figure 26. Inspection photos taken by the helicopter from approximately 10 m: (a) windmill at the Burrandowan site; (b) microwave tower at the QCAT site.

In the first mission, the helicopter flew to the approach point at low altitude as described in Section 7.4 and approached the windmill using the method explained in Section 6.6. At the inspection point, it took photos of the windmill and climbed to a safe altitude. During the return flight in controlled airspace, the helicopter was instructed by the automated airspace control system to deviate from the original flight plan to the west to avoid an aircraft (ADAC flight plan).

In the second mission,10 no flight plan changes were required during the flights to and from the inspection area. The low-altitude flight, however, was more challenging than in the first mission as the predefined avoidance direction was to the left. This made the helicopter initially fly into an area without a safe path to the windmill, as described in Section 7.4. The altitude plot of the second flight is shown in Figure 24. The different heights were already explained in Section 7.2. The plot depicts the three stages of the inspection mission: the flight to the inspection area, the low-altitude flight, and the return flight. The heights in stages 1 and 3 were predefined in the flight plan relative to the takeoff point. The two sudden height changes during the low-altitude flight were caused by terrain discontinuities (Section 5.8).
Figure 25 contains photos of the helicopter taken by an observer while approaching the windmill. The observer was in radio contact with the operator at the ground station. To meet regulatory requirements, the helicopter was also equipped with a flight-termination system that prevented the aircraft from leaving a specified mission area in case of a fatal system failure. Flight termination could have been initiated by a system monitor or the operator. The design of the flight-termination system is beyond the scope of this paper. One of the inspection photos of the windmill taken by the helicopter is shown in Figure 26(a). Figure 26(b) contains an inspection photo taken during the microwave tower inspection flights described in Section 7.5. Both photos were taken at an approximate distance of 10 m to the inspection object.

10A video showing the arrival of the helicopter in the inspection area, the pirouette descent, waggle cruise flight, the approach of the windmill, and the departure is available at http://www.cat.csiro.au/ict/download/SmartSkies/bvr_inspection_flight_short.mov

8. CONCLUSION

We presented a dependable autonomous system and the underlying methods enabling goal-oriented flights of robotic helicopters at low altitude over unknown terrain with static obstacles. The proposed LAOA system includes a novel LIDAR-based terrain- and obstacle-detection method and a novel reactive behavior-based avoidance strategy. We provided a detailed description of the system facilitating an implementation for a small unmanned helicopter. All methods were extensively flight-tested in representative outdoor environments. The focus of our work has been on dependability, computational efficiency, and easy implementation using off-the-shelf hardware components. We chose a component-based design approach including extended-state machines for modeling robotic behavior.
The state machine-based design simplified, in particular, the development and analysis of different avoidance strategies. Moreover, it makes it easy to extend the proposed strategies.

We have shown that obstacles can be reliably detected by analyzing readings of a 2D LIDAR system while
flying pirouettes during vertical descents and waggling the helicopter in yaw during forward flight. We have also shown that it is feasible to use a reactive behavior-based approach for goal-oriented obstacle avoidance. We decoupled the terrain-avoidance problem from the frontal obstacle-avoidance problem by combining a terrain-following approach with a reactive navigation avoidance strategy for a 2D environment. We put significant effort into the development of an avoidance strategy that considers real-world constraints and that is optimized for robotic helicopters operating in rural areas. The LAOA system can also safely reach locations in more confined spaces, as it is not much limited by dynamic constraints and is capable of detecting obstacles in front of and below the helicopter.

The system and methods were thoroughly evaluated during many low-altitude flights in unstructured outdoor environments without the use of maps. The helicopter performed many close-range inspection flights. Among others, it flew multiple times to a microwave tower at the QCAT site and a windmill at the Burrandowan site. Two windmill inspection flights were conducted beyond visual range without a backup pilot. In total, the helicopter followed more than 14 km of terrain at 10 m above ground, avoided 50 obstacles with no failure, and succeeded in reaching 37 reachable locations. The total flight time was more than 11 h. The experimental results demonstrate the safety and reliability of the proposed system, allowing low-risk autonomous flight.

However, our approach has some limitations. It is less suitable for environments with complex obstacle configurations that are typically found in urban areas. Here, a mapping-based approach is likely to be more efficient.
In environments for which the system was designed, however, mapping is often not beneficial, as locations are rarely revisited and the sensing range of LIDAR systems is typically short compared to the mission area. Another limitation of our approach is that obstacles are predominantly avoided from the side, as height changes are only performed through terrain following during low-altitude flight. Locations that are surrounded by obstacles that cannot be avoided through height change are unreachable. Furthermore, steep terrain is recognized as a frontal obstacle and the system tries to avoid it from the side. This may result in inefficient behavior. Finally, we have not fully investigated what kind of obstacles our system cannot detect. It mainly depends on the performance of the 2D LIDAR system and the chosen system parameters. What we can say is that the implemented system detected all obstacles in the experiments we conducted in relevant environments. Most obstacles were larger objects such as trees and sheds, but the system also detected smaller objects such as fences and antenna poles. Our system has not been specifically configured for the detection of small or thin obstacles such as power lines.

Future work could address these limitations. In addition to improving the reactive system by including the not yet flight-tested offset angle method and additional behaviors for height changes, adding a mapping component to the system would significantly improve its efficiency. Having a LIDAR system with longer range and a more accurate navigation and control system would increase cruise speed and enable flights closer to obstacles. Furthermore, a suitable11 3D LIDAR system would make the pirouette descent and waggle cruise flight modes obsolete. The LAOA system has not yet been validated on helicopters of a different class.
A procedure for determining system parameters from characteristics of the aircraft, the sensor and control system, and the environment would facilitate the adaptation of the system.

ACKNOWLEDGMENTS

This research was part of the Smart Skies Project and was supported, in part, by the Queensland State Government Smart State Funding Scheme. The authors gratefully acknowledge the contribution of the Smart Skies partners Boeing Research & Technology (BR&T), Boeing Research & Technology Australia (BR&TA), Queensland University of Technology (QUT), and all CSIRO UAS team members. In particular, we would like to thank Bilal Arain and Lennon Cork for the work on the control system of the CSIRO unmanned helicopter and Brett Wood for the technical support.

APPENDIX

Tables IV–IX present additional information.

11According to our requirements, a suitable 3D LIDAR system is dependable, lightweight, low power, easy to install, cost effective, and has a comparable resolution and field of view (when integrated).
Table IV. Nomenclature.

Variable name         Symbol        Description(a)
                      dd            vertical distance to obstacle below helicopter
                      df            horizontal distance to obstacle in front of helicopter
                      do            distance from helicopter to line through start point and goal point
                      dp            distance from helicopter to line through progressWp point and goal point
                      h             height observation for control
heightRef             hf            fixed height reference
                      hgnd          helicopter height above ground
                      hmin          helicopter height above highest point below helicopter
                      θ, φ          helicopter pitch and roll angles
                      pD            helicopter height in the earth-fixed NED frame
goalPosition          p^goal_NE     goal point
positionRef           p^f_NE        fixed position reference
wp                    p^fwp_NE      fixed target waypoint for waggle cruise mode
                      pNE           helicopter position
                      (ri, λi)      LIDAR readings in polar coordinates
                      vNED          helicopter velocity vector
                      vc            xyz velocity references for external controllers
                      ψc            yaw angle reference for external controller
                      ψf            fixed yaw angle reference
groundTrackAngleRef   ψfg           fixed ground track angle reference
helicopterYawAngle    ψ             true heading of the helicopter

(a) True north is used for the earth-fixed NED frame. All points pNE = (pN, pE) are described by their horizontal position in the earth-fixed NED frame. For simplicity reasons, we do not distinguish between points and their coordinate vectors.

Table V. Technical specifications of base system components.
Helicopter: Vario Benzin Trainer (modified); 12.3 kg maximum takeoff weight; 1.78 m rotor diameter; 23 cm³ two-stroke gasoline engine; 60 min endurance.

Avionics: L1 C/A GPS receiver (2.5 m CEP), MEMS-based AHRS, high-resolution barometric pressure sensor; Hokuyo UTM-30LX 2D LIDAR system (270° field of view, 30 m detection range, 25 ms scan time); Vortex86DX 800 MHz navigation computer producing helicopter state estimates at 100 Hz; Via Mark 800 MHz flight computer running the external velocity and yaw control at 100 Hz.
Table VI. Parameters of the LAOA system used in the flight tests.

Parameter                          Strategy 1   Strategy 2
avoidanceAngle                     90°          60°
avoidanceWpDistance                12 m         12 m
clearanceAngle                     –            60°
closeObstacleDistance              5 m          5 m
closeToGoalDistance                –            20 m
corridorLength                     –            12 m
corridorWidth                      –            20 m
cruiseHeight                       10 m         10 m
detectionWindowAngle               90°          90°
detectionWindowSize                10 m         10 m
discontinuityHeight                8 m          8 m
farObstacleDistance                15 m         15 m
freeSpaceDistance                  –            20 m
freeSpaceWpDistance                –            12 m
heightClearance                    (7 m)        (7 m)
maxAttempts                        4            10
maxHorizontalWind                  10 m/s       10 m/s
maxPathDistance                    –            50 m
maxVerticalWind                    2 m/s        2 m/s
maxWallAngle                       –            120°
maxWaggleYawAngle                  45°          45°
minimalLidarRange                  1.5 m        1.5 m
minPathProgressDistance            –            20 m
minWallFollowingProgressDistance   –            10 m
offsetAngle                        –            (20°)
outsideHysteresis                  –            20 m
pirouetteYawRate                   45°/s        45°/s
progressPathTolerance              –            5 m
safeAltitude                       55 m         55 m
safeLidarHeight                    15 m         15 m
safeObstacleScanRange              14 m         14 m
scanAngle                          –            120°
terrainPointVariation              1 m          1 m
verticalOffset                     3 m          3 m
verticalOffsetDecay                16 s         16 s
verticalSpeed                      1 m/s        1 m/s
waggleCruiseSpeed                  1 m/s        1 m/s
wagglePeriodTime                   4 s          4 s
wallCatchDistance                  –            15 m
wallReleaseDistance                –            15 m
Table VII. Condition-based events(a) used in state diagrams.

Event (↔ opposite event): Condition

clearOfWall: wrap(ψ − bearingAngle) ≥ avDirection · clearanceAngle, where wrap(α) = atan2(sin α, cos α)
climb (↔ descend): hgnd ≤ heightRef
closeObstacle: [(df < closeObstacleDistance) ∧ (df ≠ 0)] ∨ [(hmin < closeObstacleDistance) ∧ (hmin ≠ 0)]
closeToGoal: (|p^goal_NE − pNE| < closeToGoalDistance) ∧ (closeToGoalFlag = 0)
corridorObstacle: waggle cruise mode enabled ∧ (df ≠ 0) ∧ [df cos(ψ − ψg) < corridorLength] ∧ [|df sin(ψ − ψg)| < ½ corridorWidth]
directionKnown (↔ directionUnknown): avDirection ≠ 0
farObstacle: waggle cruise mode enabled ∧ (df < farObstacleDistance) ∧ (df ≠ 0)
firstAttempt (↔ notFirstAttempt): attempts = 1
maxAttempts (↔ moreAttempts): attempts > maxAttempts
noLidarHeight: (hgnd = 0) ∧ terrain following switched on
outside: [(do < (maxPathDistance − outsideHysteresis)) since last outside event ∨ no previous outside event] ∧ (do > maxPathDistance)
outsideFlightArea: helicopter is between flight area and no-fly zone
safeLidarHeight (↔ noSafeLidarHeight): (hgnd < safeLidarHeight) ∧ (hgnd ≠ 0)
pirouetteObstacle: ([(df < farObstacleDistance) ∧ (df ≠ 0)] ∨ [(dd < heightClearance(b)) ∧ (dd ≠ 0)]) ∧ pirouette descent mode or yaw mode enabled when in height change state
progress: (initProgressFlag = 0) ∧ (dp ≤ progressPathTolerance) ∧ [|p^goal_NE − pNE| < (|p^goal_NE − last pNE with initProgressFlag = 1| − minPathProgressDistance)]
wallCatch: (df < wallCatchDistance) ∧ (df ≠ 0)
wallFollowingProgress (↔ noWallFollowingProgress): |p^goal_NE − pNE| < (|p^goal_NE − wallFollowingStart| − minWallFollowingProgressDistance)
wallRelease: (df ≥ wallReleaseDistance) ∨ (df = 0)

(a) A condition-based event is sent when a condition starts to hold or when a condition holds at the time a corresponding query event is received.
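As an illustration, the closeObstacle condition can be written directly as code. This is our sketch, not the paper's implementation; it reads a distance measurement of 0 as "no detection" and takes closeObstacleDistance = 5 m from Table VI:

```python
CLOSE_OBSTACLE_DISTANCE = 5.0  # m, closeObstacleDistance in Table VI

def close_obstacle(d_f, h_min):
    """closeObstacle condition from Table VII: a frontal obstacle (df) or an
    obstacle below the helicopter (hmin) is closer than closeObstacleDistance.
    A value of 0 encodes 'no detection' and must not trigger the event."""
    return ((d_f != 0 and d_f < CLOSE_OBSTACLE_DISTANCE) or
            (h_min != 0 and h_min < CLOSE_OBSTACLE_DISTANCE))

assert close_obstacle(4.0, 0.0)       # frontal obstacle within 5 m
assert not close_obstacle(0.0, 0.0)   # no detections: event not sent
assert not close_obstacle(14.0, 9.0)  # obstacles present but not close
```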
(b) The heightClearance condition was introduced after we finished our experiments, as we realized there are some rare cases in which the safety distance to obstacles and terrain during a height change could be less than specified.

Table VIII. Calculation of variables used in state diagrams.

avDirection = 1 if wrap(ψgoal − leftObstacleAngle) > wrap(rightObstacleAngle − ψgoal), and −1 otherwise, where wrap(α) = atan2(sin α, cos α)

avoidanceWp = (pf_N + avoidanceWpDistance · cos ψa, pf_E + avoidanceWpDistance · sin ψa), where ψa = bearingAngle + avDirection · avoidanceAngle

freeSpaceWp1 = (pf_N + freeSpaceWpDistance · cos ψgoal, pf_E + freeSpaceWpDistance · sin ψgoal)

freeSpaceWp2 = (pf_N + freeSpaceWpDistance · cos ψ, pf_E + freeSpaceWpDistance · sin ψ)

leftScanAngle = ψgoal − ½ scanAngle

rightScanAngle = ψgoal + ½ scanAngle

ψfg = atan2(p^fwp_E − pf_E, p^fwp_N − pf_N)

ψgoal = atan2(p^goal_E − pf_E, p^goal_N − pf_N)
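The Table VIII formulas translate directly into code. The sketch below is ours, not from the paper; the default parameter values are the strategy 2 settings from Table VI, angles are in radians with the NED convention (positive yaw clockwise from north), and reading avDirection = +1 as an avoidance to the right is our interpretation of the sign convention:

```python
import math

def wrap(alpha):
    """wrap(a) = atan2(sin a, cos a), as defined in Table VIII."""
    return math.atan2(math.sin(alpha), math.cos(alpha))

def avoidance_direction(psi_goal, left_obstacle_angle, right_obstacle_angle):
    """avDirection from Table VIII: +1 if the obstacle extends farther to the
    left of the goal bearing than to the right, otherwise -1."""
    if wrap(psi_goal - left_obstacle_angle) > wrap(right_obstacle_angle - psi_goal):
        return 1
    return -1

def avoidance_waypoint(pf_n, pf_e, bearing_angle, av_direction,
                       avoidance_angle=math.radians(60),  # Table VI, strategy 2
                       wp_distance=12.0):                 # avoidanceWpDistance
    """avoidanceWp from Table VIII: offset the fixed position reference (N, E)
    by avoidanceWpDistance along psi_a = bearingAngle + avDirection * avoidanceAngle."""
    psi_a = bearing_angle + av_direction * avoidance_angle
    return (pf_n + wp_distance * math.cos(psi_a),
            pf_e + wp_distance * math.sin(psi_a))

# Obstacle extending mostly left of a due-north goal bearing: avoid to the
# right (+1), placing the next waypoint 12 m away, 60 deg right of the bearing.
d = avoidance_direction(0.0, math.radians(-50), math.radians(10))
wp = avoidance_waypoint(0.0, 0.0, 0.0, d)
```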
Table IX. ESM state machine language.^a

↑: triggers a transition when a clock signal is received
.../↑event, ...: event sent during a state transition^b
∗: triggers a transition if, at the time a clock signal is received, at least one nested state of the current superstate is a final state or all statements of the current atomic state have been processed
superstate encapsulating nested states of container containerName
container containerName with two orthogonal regions (two concurrent states)
region encapsulating a flat state machine (no concurrent states)

^a The ESM language is similar to Harel's statecharts (Harel, 1987) and the UML state machine language. The table contains only elements that are specific to the ESM language and necessary for the understanding of the state diagrams used in this paper.
^b In this paper, state diagrams contain only events that are necessary for the understanding of a method, and events are not filtered at container boundaries.

REFERENCES

Andert, F., & Adolf, F. (2009). Online world modeling and path planning for an unmanned helicopter. Autonomous Robots, 27(3), 147–164.
Andert, F., Adolf, F.-M., Goormann, L., & Dittrich, J. (2010). Autonomous vision-based helicopter flights through obstacle gates. Journal of Intelligent and Robotic Systems, 57(1-4), 259–280.
Bachrach, A., He, R., Prentice, S., & Roy, N. (2011). RANGE - robust autonomous navigation in GPS-denied environments. Journal of Field Robotics, 28(5), 644–666.
Bachrach, A., He, R., & Roy, N. (2009). Autonomous flight in unknown indoor environments. International Journal of Micro Air Vehicles, 1(4), 217–228.
Beyeler, A., Zufferey, J.-C., & Floreano, D. (2009). Vision-based control of near-obstacle flight. Autonomous Robots, 27(3), 201–219.
Byrne, J., Cosgrove, M., & Mehra, R. (2006). Stereo based obstacle detection for an unmanned air vehicle. In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA) (pp.
2830–2835), Orlando, FL.
Choset, H., Lynch, K., Hutchinson, S., Kantor, G., Burgard, W., Kavraki, L., & Thrun, S. (2005). Principles of Robot Motion: Theory, Algorithms, and Implementations. Cambridge, MA: MIT Press.
Clothier, R., Baumeister, R., Brünig, M., Duggan, A., & Wilson, M. (2011). The Smart Skies project. IEEE Aerospace and Electronic Systems Magazine, 26(6), 14–23.
Conroy, J., Gremillion, G., Ranganathan, B., & Humbert, J. (2009). Implementation of wide-field integration of optic flow for autonomous quadrotor navigation. Autonomous Robots, 27(3), 189–198.
Dauer, J., Lorenz, S., & Dittrich, J. (2011). Advanced attitude command generation for a helicopter UAV in the context of a feedback free reference system. In AHS International Specialists Meeting on Unmanned Rotorcraft (pp. 1–12), Tempe, AZ.
Garratt, M., & Chahl, J. (2008). Vision-based terrain following for an unmanned rotorcraft. Journal of Field Robotics, 25(4-5), 284–301.
Griffiths, S., Saunders, J., Curtis, A., Barber, B., McLain, T., & Beard, R. (2006). Maximizing miniature aerial vehicles—Obstacle and terrain avoidance for MAVs. IEEE Robotics and Automation Magazine, 13(3), 34–43.
Harel, D. (1987). Statecharts: A visual formalism for complex systems. Science of Computer Programming, 8(3), 231–274.
He, R., Bachrach, A., & Roy, N. (2010). Efficient planning under uncertainty for a target-tracking micro aerial vehicle. In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA) (pp. 1–8), Anchorage, AK.
Herisse, B., Hamel, T., Mahony, R., & Russotto, F. (2010). A terrain-following control approach for a VTOL unmanned aerial vehicle using average optical flow. Autonomous Robots, 29(3-4), 381–399.
Hrabar, S. (2012). An evaluation of stereo and laser-based range sensing for rotorcraft unmanned aerial vehicle obstacle avoidance. Journal of Field Robotics, 29(2), 215–239.
Hrabar, S., & Sukhatme, G. (2009). Vision-based navigation through urban canyons.
Journal of Field Robotics, 26(5), 431–452.
Johnson, E. N., Mooney, J. G., Ong, C., Hartman, J., & Sahasrabudhe, V. (2011). Flight testing of nap-of-the-earth unmanned helicopter systems. In Proceedings of the 67th Annual Forum of the American Helicopter Society (pp. 1–13), Virginia Beach, VA.
Kendoul, F. (2012). A survey of advances in guidance, navigation and control of unmanned rotorcraft systems. Journal of Field Robotics, 29(2), 315–378.
Meier, L., Tanskanen, P., Heng, L., Lee, G. H., Fraundorfer, F., & Pollefeys, M. (2012). PIXHAWK: A micro aerial vehicle design for autonomous flight using onboard computer vision. Autonomous Robots, 33(1-2), 21–39.
Merz, T., & Kendoul, F. (2011). Beyond visual range obstacle avoidance and infrastructure inspection by an autonomous helicopter. In IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) (pp. 4953–4960), San Francisco, CA.
Merz, T., Rudol, P., & Wzorek, M. (2006). Control system framework for autonomous robots based on extended state machines. In IARIA International Conference on Autonomic and Autonomous Systems (ICAS) (pp. 1–8), Silicon Valley, CA.
Montgomery, J., Johnson, A., Roumeliotis, S., & Matthies, L. (2006). The Jet Propulsion Laboratory autonomous helicopter testbed: A platform for planetary exploration technology research and development. Journal of Field Robotics, 23(3/4), 245–267.
Ruffier, F., & Franceschini, N. (2005). Optic flow regulation: The key to aircraft automatic guidance. Robotics and Autonomous Systems, 50(4), 177–194.
Sanfourche, M., Besnerais, G. L., Fabiani, P., Piquereau, A., & Whalley, M. (2009). Comparison of terrain characterization methods for autonomous UAVs. In Proceedings of the 65th Annual Forum of the American Helicopter Society (pp. 1–14), Grapevine, TX.
Sanfourche, M., Delaune, J., Besnerais, G. L., de Plinval, H., Israel, J., Cornic, P., Treil, A., Watanabe, Y., & Plyer, A. (2012). Perception for UAV: Vision-based navigation and environment modeling. Journal Aerospace Lab, 4, 1–19.
Scherer, S., Rehder, J., Achar, S., Cover, H., Chambers, A., Nuske, S., & Singh, S. (2012). River mapping from a flying robot: State estimation, river detection, and obstacle mapping. Autonomous Robots, 33(1-2), 189–214.
Scherer, S., Singh, S., Chamberlain, L., & Elgersma, M. (2008). Flying fast and low among obstacles: Methodology and experiments. International Journal of Robotics Research, 27(5), 549–574.
Shim, D., Chung, H., & Sastry, S. (2006).
Conflict-free navigation in unknown urban environments. IEEE Robotics and Automation Magazine, 13(3), 27–33.
Takahashi, M., Schulein, G., & Whalley, M. (2008). Flight control law design and development for an autonomous rotorcraft. In Proceedings of the 64th Annual Forum of the American Helicopter Society (pp. 1652–1671), Montreal, Canada.
Theodore, C., Rowley, D., Hubbard, D., Ansar, A., Matthies, L., Goldberg, S., & Whalley, M. (2006). Flight trials of a rotorcraft unmanned aerial vehicle landing autonomously at unprepared sites. In Proceedings of the 62nd Annual Forum of the American Helicopter Society (pp. 1–15), Phoenix, AZ.
Tsenkov, P., Howlett, J., Whalley, M., Schulein, G., Takahashi, M., Rhinehart, M., & Mettler, B. (2008). A system for 3D autonomous rotorcraft navigation in urban environments. In Proceedings of the AIAA Guidance, Navigation, and Control Conference and Exhibit (pp. 1–23), Honolulu, HI.
Viquerat, A., Blackhall, L., Reid, A., Sukkarieh, S., & Brooker, G. (2007). Reactive collision avoidance for unmanned aerial vehicles using Doppler radar. In Proceedings of the International Conference on Field and Service Robotics (FSR) (pp. 245–254), Chamonix, France.
Whalley, M., Takahashi, M., Tsenkov, P., & Schulein, G. (2009). Field-testing of a helicopter UAV obstacle field navigation and landing system. In Proceedings of the 65th Annual Forum of the American Helicopter Society (pp. 1–8), Grapevine, TX.
William, B., Green, E., & Oh, P. (2008). Optic-flow-based collision avoidance. IEEE Robotics and Automation Magazine, 15(1), 96–103.
Zelenka, R., Smith, P., Coppenbarger, R., Njaka, C., & Sridhar, B. (1996). Results from the NASA automated nap-of-the-earth program. In Proceedings of the 52nd Annual Forum of the American Helicopter Society (pp. 107–115), Washington, D.C.
Zufferey, J.-C., & Floreano, D. (2006). Fly-inspired visual steering of an ultralight indoor aircraft. IEEE Transactions on Robotics, 22(1), 137–146.