Research Article

Efficient angle-aware coverage control for large-scale 3D map reconstruction using drone networks

Pages 144-155 | Received 15 Oct 2023, Accepted 31 Jan 2024, Published online: 14 May 2024

Abstract

This paper presents a novel angle-aware coverage control method to enhance monitoring efficiency for large-scale 3D map reconstruction using drone networks. The proposed method integrates Voronoi-based coverage control, as the nominal input, into the quadratic programming problem that must be solved in the original angle-aware coverage control. This approach provides a practical solution, ensuring diverse viewing angles to improve the quality of 3D map reconstruction and offering effective coverage of distant unobserved areas, particularly for large-scale missions. We implement the proposed method and validate its effectiveness through simulations on the Robot Operating System as well as experiments conducted on a robotic testbed. The comparative analysis demonstrates the advantages of our proposed approach, resulting in increased monitoring efficiency for large-scale 3D map reconstruction.

1. Introduction

Recently, coordinated control of drone networks has emerged as a critical factor in various applications, including gas sensing [Citation1], disaster management [Citation2,Citation3], smart agriculture [Citation4–6], and wildfire tracking [Citation7,Citation8]. By utilizing multiple drones, we can improve mission efficiency, while enhancing robustness against potential failures of individual drones. Particularly in the domain of smart agriculture, drone networks are expected to offer valuable insights to enhance crop management practices and to optimize yields through crop monitoring and inspection [Citation9,Citation10]. To achieve high-quality inspection, advanced techniques like Structure from Motion (SfM) have been employed for generating a 3D map of agricultural fields from aerial images [Citation5]. In this scenario, the drones in the network need to sample each point in the environment from diverse viewing angles, thereby guaranteeing comprehensive coverage and accurate data acquisition.

To tackle the above challenge, classical coverage control techniques [Citation11,Citation12] have been implemented to facilitate the efficient deployment of mobile sensors in a distributed manner. Alternatively, persistent coverage control algorithms [Citation13–16] have also been studied, allowing drones to patrol the environment and dynamically adjust importance indices assigned to points in the environment based on the previous coverage states. These control technologies are expected to be useful for effective image sampling for 3D map reconstruction as well. However, a limitation of these approaches is their tendency to primarily sample images from the top of the canopy, thereby restricting the capture of data from diverse and rich viewing angles.

The authors of [Citation17] proposed a novel angle-aware coverage control scheme for sampling images from rich viewing angles. They formulated the coverage control problem in a 5-dimensional virtual field, incorporating a 3-dimensional set of target points for sampling and a 2-dimensional space characterizing the viewing angles. A quadratic programming (QP)-based controller also enabled adaptive drone motion, accelerating sampling in well-observed regions and decelerating it otherwise. Suenaga et al. [Citation18] demonstrated experimentally that the angle-aware method yields drastically better map quality than the traditional persistent coverage control [Citation16]. However, when applied to large-scale map reconstruction, the previous controller tends to delay approaching distant unobserved points, significantly impacting the overall monitoring efficiency. This drawback becomes particularly critical when monitoring larger fields.

In this paper, we extend the previous angle-aware coverage controller to address the above limitation, serving as a practical solution for large-scale map reconstruction applications. Firstly, we outline the drawbacks of [Citation17] through simulation, emphasizing its shortcomings in effectively monitoring large-scale map reconstruction. Secondly, we propose a novel QP-based controller that incorporates Voronoi-based coverage control [Citation11,Citation12] into the original angle-aware control method as a nominal control input. We demonstrate that the present controller enhances the monitoring efficiency, as observed through simulations on Robot Operating System (ROS) and experiments conducted on a robotic testbed.

2. Angle-aware coverage control

The goal of this section is to briefly review the angle-aware coverage control, which serves as the basis for the extension presented in this paper. We begin by outlining the mathematical formulation of the angle-aware coverage control problem. Subsequently, we summarize the solution proposed by Shimizu et al. [Citation17] and highlight, through simulation, the challenges this controller faces in a large-scale 3D map reconstruction application.

2.1. Problem formulation

The angle-aware coverage problem is defined as a task of monitoring a field using multiple drones, in order to ensure that every point within the field is observed from various viewing angles. This problem originated from efficient image sampling for 3D map reconstruction through Structure from Motion (SfM) techniques [Citation6,Citation17–20].

Consider a fleet of $n$ drones equipped with on-board cameras, whose index set is denoted by $\mathcal{I} := \{1, 2, \dots, n\}$. These drones operate in 3D space, each with position $[x_i\ y_i\ z_i]^T \in \mathbb{R}^3$ relative to the world frame $\Sigma_w$. A local controller maintains a constant attitude and a constant altitude $z_c$ for all drones. Accordingly, our primary focus is the motion of the 2D coordinates $p_i := [x_i\ y_i]^T \in \mathcal{P} \subset \mathbb{R}^2$, where $\mathcal{P}$ is a compact subset of the plane at the constant $z$-coordinate $z_c$ in which $p_i$ must lie for all $i \in \mathcal{I}$. Figure 1 shows an illustration of the intended scene. Moreover, each drone $i \in \mathcal{I}$ adheres to the single-integrator dynamics $\dot{p}_i = u_i$, $u_i \in U \subset \mathbb{R}^2$, where $u_i$ is the velocity input to be designed.

Figure 1. Illustration of the angle-aware coverage problem.


Let us next define a target field $B \subset \mathbb{R}^3$ as a compact set containing the field's surface, as illustrated in Figure 1. The primary objective is to ensure that every point $[x\ y\ z]^T$ within the target field $B$ is observed from a diverse set of viewing angles, characterized by the horizontal angle $\theta_h \in \Theta_h \subseteq [-\pi, \pi)$ and the vertical angle $\theta_v \in \Theta_v \subseteq (0, \pi/2]$ in Figure 1. To effectively address this coverage problem, we define a virtual field $Q_c \subset \mathbb{R}^5$ consisting of the five variables $(x, y, z, \theta_h, \theta_v)$. The drone position $p_i \in \mathcal{P}$ viewing $[x\ y\ z]^T$ from the angles $\theta_h$ and $\theta_v$ is then uniquely determined by the map
\[
\zeta : [x\ y\ z\ \theta_h\ \theta_v]^T \mapsto
\begin{bmatrix}
x - (z_c - z)\tan\!\left(\tfrac{\pi}{2} - \theta_v\right)\cos\theta_h \\
y - (z_c - z)\tan\!\left(\tfrac{\pi}{2} - \theta_v\right)\sin\theta_h
\end{bmatrix}. \tag{1}
\]
We assume that the monitoring performance at a point $q \in Q_c$ by the $i$-th drone is determined by the performance function $h : \mathcal{P} \times Q_c \to [0, 1]$ with a design parameter $\sigma > 0$:
\[
h(p_i, q) := \exp\!\left(-\frac{\|p_i - \zeta(q)\|^2}{2\sigma^2}\right). \tag{2}
\]
Furthermore, we define an importance index assigned to each point $q \in Q_c$. To this end, we partition the virtual field $Q_c$ into $m$ 5-D polytopes and designate $Q := \{q_j\}_{j \in \mathcal{M}}$, $\mathcal{M} := \{1, 2, \dots, m\}$, where $q_j$ refers to the 5-D coordinates of the centre of the $j$-th cell. Each cell is assumed to have the same volume $A$. Consequently, we assign an importance index $\phi_j \in [0, \infty)$ to each cell $j \in \mathcal{M}$. Following the rule defined in [Citation17], $\phi_j$ decays according to
\[
\dot{\phi}_j = -\delta \max_{i \in \mathcal{I}} h(p_i, q_j)\, \phi_j, \tag{3}
\]
where $\delta > 0$ is a positive scalar.
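As a concrete illustration, the projection map (1), the performance function (2), and one forward-Euler step of the decay law (3) can be sketched in a few lines of Python. The function names and the explicit time step `dt` are our own choices for this sketch, not part of the paper's formulation:

```python
import math

def zeta(q, z_c):
    """Projection map of Eq. (1): the 2-D position from which a drone at
    altitude z_c views the point (x, y, z) under angles (th_h, th_v)."""
    x, y, z, th_h, th_v = q
    r = (z_c - z) * math.tan(math.pi / 2 - th_v)   # horizontal stand-off
    return (x - r * math.cos(th_h), y - r * math.sin(th_h))

def perf(p, q, z_c, sigma):
    """Performance function h of Eq. (2): a Gaussian in the distance
    between the drone position p and the projected point zeta(q)."""
    zx, zy = zeta(q, z_c)
    d2 = (p[0] - zx) ** 2 + (p[1] - zy) ** 2
    return math.exp(-d2 / (2 * sigma ** 2))

def decay_phi(phi, drones, q_cells, z_c, sigma, delta, dt):
    """One forward-Euler step of the importance decay in Eq. (3),
    using the best-performing drone for each virtual cell."""
    return [
        ph * (1 - dt * delta * max(perf(p, q, z_c, sigma) for p in drones))
        for ph, q in zip(phi, q_cells)
    ]
```

A drone located exactly at $\zeta(q)$ attains the maximum performance $h = 1$, so the corresponding $\phi_j$ decays at the full rate $\delta$.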

We are now ready to present the objective function to be minimized:
\[
J := \sum_{j=1}^{m} \phi_j A. \tag{4}
\]
As $J$ approaches zero, the drones are expected to have captured images ideal for reconstructing the 3D map. Therefore, the primary objective is to control the drones so that $J$ converges to zero. Moreover, a secondary objective is added to specify the transient behaviour of the drones, depending on the progress of the image sampling: a drone in a well-observed area with low $\phi_j$ values should leave the region as soon as possible, whereas a drone in a region that has not been fully observed, with high $\phi_j$ values, should stay there to monitor it further.

2.2. Control method and its drawback in large-scale map reconstruction

To achieve the objectives defined in the previous section, the authors of [Citation17] presented a controller that enforces the constraint $\dot{J} \le -\gamma$ based on the concept of control barrier functions (please refer to Appendix A for more details). Now, the time derivative of $J$ follows
\[
\dot{J} = \sum_{j=1}^{m} \dot{\phi}_j A
= -\sum_{j=1}^{m} \delta \max_{i \in \mathcal{I}} h(p_i, q_j)\, \phi_j A
= -\sum_{i=1}^{n} \sum_{j \in V_i(p)} \delta h(p_i, q_j)\, \phi_j A
= -\sum_{i=1}^{n} I_i, \tag{5}
\]
where
\[
I_i := \sum_{j \in V_i(p)} \delta h(p_i, q_j)\, \phi_j A \tag{6}
\]
corresponds to the contribution of drone $i$ to reducing $J$ in (4). The set $V_i(p)$, which depends on $p := (p_i)_{i \in \mathcal{I}}$, is a Voronoi-like partition of the set $\mathcal{M}$ defined as
\[
V_i(p) := \{ j \in \mathcal{M} \mid \|p_i - \zeta(q_j)\| \le \|p_k - \zeta(q_j)\| \ \ \forall k \in \mathcal{I} \}. \tag{7}
\]
Now, the number of cells $m$ tends to be huge owing to the higher dimension of the field $Q_c$ compared with standard coverage control [Citation11–16]. This makes the implementation of the controller computationally very expensive. To address this issue, the drone field $\mathcal{P}$ is discretized into $l$ polygons, denoted $\{\mathcal{P}_k \mid k \in \mathcal{L}\}$, $\mathcal{L} := \{1, \dots, l\}$. We also denote the centre of the $k$-th polygon by $\chi_k \in \mathcal{P}_k \subset \mathcal{P}$. The coverage of $q_j$ is then assessed by projecting it onto the polygon $\mathcal{P}_k$ in which $\zeta(q_j)$ is situated. This leads to the transformation of $\phi := (\phi_j)_{j \in \mathcal{M}}$ into $\psi := (\psi_k)_{k \in \mathcal{L}}$, $\psi_k \in [0, \infty)$, governed by
\[
\psi_k = \sum_{j \in \mathcal{M} \ \text{s.t.}\ \zeta(q_j) \in \mathcal{P}_k} \phi_j. \tag{8}
\]
Under the slight approximation $\zeta(q_j) \approx \chi_k$, the update of $\psi_k$ can be approximated as
\[
\dot{\psi}_k = \sum_{j \in \mathcal{M} \ \text{s.t.}\ \zeta(q_j) \in \mathcal{P}_k} \dot{\phi}_j
= -\sum_{j \in \mathcal{M} \ \text{s.t.}\ \zeta(q_j) \in \mathcal{P}_k} \delta \max_{i \in \mathcal{I}} h(p_i, q_j)\, \phi_j
\approx -\delta \max_{i \in \mathcal{I}} \bar{h}(p_i, \chi_k)\, \psi_k. \tag{9}
\]
Here $\bar{h}$ is the approximation of the performance function using the centres $\chi_k$:
\[
\bar{h}(p_i, \chi_k) := \exp\!\left(-\frac{\|p_i - \chi_k\|^2}{2\sigma^2}\right). \tag{10}
\]
Equation (9) means that we only need to keep track of $\psi_k$, $k = 1, 2, \dots, l$, rather than $\phi_j$, $j = 1, 2, \dots, m$.
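The compression (8) can be sketched as follows. The `cell_of` helper, which maps a planar point to the index of the polygon containing it, is a hypothetical gridding function introduced for this sketch, not something specified in the paper:

```python
import math

def compress(phi, q_cells, z_c, cell_of):
    """Aggregate the 5-D importance indices phi_j into the planar indices
    psi_k of Eq. (8): each virtual cell q_j contributes its phi_j to the
    polygon P_k containing its projection zeta(q_j)."""
    psi = {}
    for ph, q in zip(phi, q_cells):
        x, y, z, th_h, th_v = q
        r = (z_c - z) * math.tan(math.pi / 2 - th_v)
        proj = (x - r * math.cos(th_h), y - r * math.sin(th_h))  # zeta(q_j)
        k = cell_of(proj)
        psi[k] = psi.get(k, 0.0) + ph
    return psi
```

For a uniform square grid of side 0.03 m, `cell_of` could simply snap the projected point to the nearest grid index, so many high-dimensional cells collapse into one planar index.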
Consequently, in view of $J = \sum_k \psi_k A$, the time derivative of $J$ can also be written as
\[
\dot{J} = \sum_{k=1}^{l} \dot{\psi}_k A \approx -\sum_{i=1}^{n} \sum_{k \in \bar{V}_i(p)} \delta \bar{h}(p_i, \chi_k)\, \psi_k A, \tag{11}
\]
where
\[
\bar{V}_i(p) := \{ k \in \mathcal{L} \mid \|p_i - \chi_k\| \le \|p_\iota - \chi_k\| \ \ \forall \iota \in \mathcal{I} \}. \tag{12}
\]
Finally, the authors of [Citation17] presented the QP-based controller
\[
(u_i^*, w_i^*) = \mathop{\mathrm{argmin}}_{(u_i, w_i) \in U \times \mathbb{R}} \ \epsilon \|u_i\|^2 + |w_i|^2, \tag{13a}
\]
\[
\text{s.t.} \quad \bar{\xi}_{1i}^T u_i + \bar{\xi}_{2i} \le w_i, \tag{13b}
\]
where $w_i$ is a slack variable ($w_i \ge 0$), $\epsilon$ is a positive constant that determines the strength of the penalty on constraint violations, and $(\bar{\xi}_{1i}, \bar{\xi}_{2i})$ are defined as
\[
\bar{\xi}_{1i} := \sum_{k \in \bar{V}_i(p)} \delta \frac{p_i - \chi_k}{\sigma^2} \bar{h}(p_i, \chi_k)\, \psi_k A, \tag{14a}
\]
\[
\bar{\xi}_{2i} := \sum_{k \in \bar{V}_i(p)} \left( \delta^2 \bar{h}^2(p_i, \chi_k) + \bar{h}(p_i, \chi_k) \right) \psi_k A. \tag{14b}
\]
Please refer to [Citation17] for more details of the controller.
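Since each drone's problem is a small QP with a single scalar inequality, a minimal sketch can solve the form of (13a) in closed form from the KKT conditions rather than calling a generic solver (the paper uses CVXOPT). The box input set $U$ is approximated here by clipping, which is a simplification of ours:

```python
def qp_step(xi1, xi2, eps, u_max):
    """Closed-form solution of the per-drone QP
        min_{u, w}  eps*||u||^2 + w^2   s.t.  xi1^T u + xi2 <= w.
    If the constraint holds at (u, w) = (0, 0), that point is optimal;
    otherwise the constraint is active and the KKT system gives the
    multiplier lam directly.  The box set U is handled by clipping."""
    n2 = xi1[0] ** 2 + xi1[1] ** 2
    if xi2 <= 0.0:                       # constraint inactive at the origin
        u, w = (0.0, 0.0), 0.0
    else:                                # active: xi1^T u + xi2 = w
        lam = 2.0 * xi2 / (n2 / eps + 1.0)
        u = (-lam * xi1[0] / (2 * eps), -lam * xi1[1] / (2 * eps))
        w = lam / 2.0
    clip = lambda v: max(-u_max, min(u_max, v))
    return (clip(u[0]), clip(u[1])), w
```

The active-constraint case pushes $u_i$ opposite to $\bar{\xi}_{1i}$, i.e. toward positions that speed up the decay of $J$, while the slack $w$ absorbs whatever the bounded input cannot achieve.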

Next, we present a simulation case study for a large-scale map reconstruction mission and reveal a fundamental problem of the controller (13a). The simulation was conducted on ROS Noetic, solving the associated quadratic programming problem using CVXOPT [Citation21].

Let us now consider a single drone ($n = 1$) with the initial position $p_1 = [1.0\ 0.2]^T$ m. The drone's speed is restricted by defining the input space $U$ as $[-0.6, 0.6]\,\mathrm{m/s} \times [-0.6, 0.6]\,\mathrm{m/s}$. Each drone is equipped with a local controller to maintain a constant altitude of 1.0 m. The viewing-angle space $\Theta_h$ spans $[-\pi, \pi)$, while $\Theta_v$ covers the range $[\pi/3, \pi/2]$. The target field is defined as $[-2.5, 2.5]\,\mathrm{m} \times [-5.0, 5.0]\,\mathrm{m} \times [0.0, 0.5]\,\mathrm{m}$, as shown in Figure 2(a). The drones navigate the $6.0\,\mathrm{m} \times 12.0\,\mathrm{m}$ field $\mathcal{P}$ to monitor all positions of the target field from various angles. To achieve this, we discretize the virtual field $Q_c$ into $m = 1.5 \times 10^7$ cells. Each cell is a polyhedron with dimensions $0.02\,\mathrm{m} \times 0.02\,\mathrm{m} \times 0.1\,\mathrm{m} \times \frac{\pi}{30}\,\mathrm{rad} \times \frac{\pi}{30}\,\mathrm{rad}$, and its volume is $A = \frac{4\pi^2}{9} \times 10^{-7}\,\mathrm{m^3\,rad^2}$. To implement the controller (13a), the field $\mathcal{P}$ is also discretized into $l = 1.0 \times 10^4$ polygons, each a $0.03\,\mathrm{m} \times 0.03\,\mathrm{m}$ square. To determine the significance of each cell, we assign the initial importance indices $\phi_j = 1$ for all $j \in \mathcal{M}$ and compress them into $\psi_k$, following the distribution shown in Figure 2(a). To ensure smoothness, we set $\sigma = 0.1$, which effectively reduces $h$ to almost zero beyond a drone's viewing range of 1.0 m. The other parameters are $\epsilon = 0.0001$, $\gamma = 0.15$, $\delta = 5$, and $a = 1$.
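As a quick sanity check on the quoted discretization, the cell volume $A$ follows directly from the stated cell dimensions:

```python
import math

# Cell volume A for the 5-D grid: 0.02 m x 0.02 m x 0.1 m x (pi/30) rad x (pi/30) rad.
A = 0.02 * 0.02 * 0.1 * (math.pi / 30) ** 2
# 0.02^2 * 0.1 = 4e-5 and (pi/30)^2 = pi^2/900, hence A = (4*pi^2/9) * 1e-7.
print(A)  # ~4.386e-07 m^3 rad^2
```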

Figure 2. ROS simulation of the angle-aware coverage controller (13a). This simulation can be seen here. (a) Target field and (b) evolution of the importance function ψ.


Figure 2(b) displays snapshots of the simulation, showcasing a colour map of the field representing the importance index $\psi$. Regions depicted in red indicate high importance, while blue regions indicate low importance. Upon closer examination of these snapshots, we notice that, after a while, scattered unobserved patches appear throughout the field. Consequently, the drone covers long distances merely to visit small pieces of unobserved area, resulting in wasteful operation in both time and energy.

Figure 3 illustrates the evolution of the cost function $J$, showing a progressive reduction of $J$ towards zero over time. As observed in Figure 3, after approximately $t = 600$ s, the rate of decrease in $J$ diminishes due to the frequent constraint violations that are allowed by softening the constraint $\dot{J} \le -\gamma$ in (13a). This observation confirms that, after a while, the drones' efficiency decreases because they need to travel long distances to visit small unobserved areas. This can be attributed to the fact that the distance of unobserved points from the drone is not considered in the objective function. Consequently, distant unobserved points have minimal impact on the defined efficiency and are assigned lower priority. While this phenomenon may not significantly affect monitoring efficiency for smaller areas, as demonstrated in the experiment at the Tokyo Tech Robot Zoo Sky testbed [Citation18], it becomes more apparent when the field size is larger.

Figure 3. Cost function J for monitoring using the angle-aware coverage controller (13a). The red dashed line represents a line with slope −γ.


In the subsequent sections, we will propose a new approach to address the above issue, with the aim of enhancing the monitoring efficiency of individual drones when performing 3D map reconstruction for large-scale fields.

3. Controller design for large-scale map reconstruction

This section presents a novel controller suitable for large-scale map reconstruction. In the previous section, we hypothesized that the small patches of unobserved area were due to the lack of evaluation of the distance between $\chi_k$, $k = 1, 2, \dots, l$, and $p_i$. We thus reflect this factor in the controller design in this section. A natural idea would be to modify the cost function $J$ so that it explicitly involves the distance. However, after extensive trials, the results did not meet our expectations for efficient coverage; specifically, the drones got stuck at particular positions without continuing exploratory actions. We thus approach the problem differently, namely by employing a nominal control input $u_{\mathrm{nom}}$ and reflecting the distance in the design of $u_{\mathrm{nom}}$. The primary motivation behind introducing $u_{\mathrm{nom}}$ is to enable the drones not only to reduce the objective function $J$ in (4) but also to be attracted to locations with higher and denser importance values, even when these locations are distant from the drone's current position. Consequently, in the context of large-scale monitoring, the drones are expected to behave more efficiently, while avoiding unnecessary extensive travel to inspect small unobserved areas.

Let us design the nominal controller $u_{\mathrm{nom}}$. To this end, we focus on the following cost function presented in [Citation22]:
\[
J_c = \int_{\mathcal{P}} \left( \min_{i \in \mathcal{I}} \|p_i - q\|^2 \right) \psi_c(q)\, dq = \sum_{i=1}^{n} \int_{V_i^c(p)} \|p_i - q\|^2 \psi_c(q)\, dq, \tag{15}
\]
where $V_i^c(p)$ is the Voronoi cell defined as
\[
V_i^c(p) := \{ q \in \mathcal{P} \mid \|p_i - q\| \le \|p_k - q\| \ \ \forall k \in \mathcal{I} \}, \tag{16}
\]
and $\psi_c : \mathcal{P} \to [0, \infty)$ is defined by $\psi_c(q) = \psi_k$ if $q \in \mathcal{P}_k$. Notice that the distance between $p_i$ and $q \in \mathcal{P}$ is explicitly evaluated in the cost function $J_c$. Define the centroid of the Voronoi cell $V_i^c(p)$ by
\[
\mathrm{cent}(V_i^c(p)) = \frac{\int_{V_i^c(p)} q\, \psi_c(q)\, dq}{\int_{V_i^c(p)} \psi_c(q)\, dq}. \tag{17}
\]
The gradient of (15) with respect to $p_i$ is then shown in [Citation22] to be
\[
\frac{\partial J_c}{\partial p_i} = 2 \left( \int_{V_i^c(p)} \psi_c(q)\, dq \right) \left( p_i - \mathrm{cent}(V_i^c(p)) \right). \tag{18}
\]
According to the above equation and the Lloyd descent algorithm [Citation23], the authors of [Citation22] presented a gradient-descent coverage controller. The resulting move-to-centroid controller [Citation22] guides each drone toward a critical point of the cost function $J_c$.

Based on the above investigations, we design the nominal controller $u_{\mathrm{nom}} = [u_{\mathrm{nom},1}^T\ u_{\mathrm{nom},2}^T\ \cdots\ u_{\mathrm{nom},n}^T]^T$ as
\[
u_{\mathrm{nom},i} = -k \left( p_i - \mathrm{cent}(V_i^c(p)) \right), \quad i \in \mathcal{I}, \tag{19}
\]
where $k > 0$ is an appropriately tuned control gain. In implementation, the centroid is approximately computed by
\[
\mathrm{cent}(V_i^c(p)) \approx \frac{\sum_{k : \chi_k \in V_i^c(p)} \psi_k \chi_k}{\sum_{k : \chi_k \in V_i^c(p)} \psi_k}. \tag{20}
\]
Note that the importance index $\psi_k$ is updated by (9) and is hence time-varying, differently from [Citation22].
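The move-to-centroid nominal input of (19)-(20) reduces to a weighted average over the planar cells in a drone's Voronoi region. In this sketch we assume the caller has already restricted the cell centres to $V_i^c(p)$; the function and argument names are ours:

```python
def nominal_input(p_i, cells, psi, k_gain):
    """Nominal input of Eqs. (19)-(20): steer drone i toward the
    psi-weighted centroid of the planar cell centres chi_k lying in its
    Voronoi region (cells/psi are assumed pre-filtered to V_i^c(p))."""
    wsum = sum(psi)
    cx = sum(w * c[0] for w, c in zip(psi, cells)) / wsum
    cy = sum(w * c[1] for w, c in zip(psi, cells)) / wsum
    return (-k_gain * (p_i[0] - cx), -k_gain * (p_i[1] - cy))
```

Because the weights $\psi_k$ shrink as regions are covered, the centroid, and hence the nominal input, continuously drifts toward the remaining dense unobserved areas.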

Let us next present a QP-based controller suitable for a large-scale map reconstruction mission. In the presence of $u_{\mathrm{nom}}$, the QP-based controller (13a) is transformed into the following form:
\[
(u_i^*, w_i^*) = \mathop{\mathrm{argmin}}_{(u_i, w_i) \in U \times \mathbb{R}} \ \epsilon \|u_i - u_{\mathrm{nom},i}\|^2 + |w_i|^2, \tag{21a}
\]
\[
\text{s.t.} \quad \bar{\xi}_{1i}^T u_i + \bar{\xi}_{2i} \le w_i. \tag{21b}
\]
Here, $\bar{\xi}_{1i}$, $\bar{\xi}_{2i}$, and $u_{\mathrm{nom},i}$ are defined according to (14a), (14b), and (19), respectively. The controller enforces the constraint through the function $J$ in (4), while $u_{\mathrm{nom}}$ is generated using the function $J_c$ in (15), which considers the distance between $q \in \mathcal{P}$ and $p_i$.
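The modified QP differs from the original only in that its objective is centred at the nominal input. Substituting $v = u_i - u_{\mathrm{nom},i}$ reduces it to the same single-constraint QP with a shifted constant term, which a small KKT computation solves in closed form. This is a sketch of that reduction under our own naming; the paper solves the QP with a generic solver, and the input bounds $U$ are omitted here for brevity:

```python
def qp_step_nominal(xi1, xi2, u_nom, eps):
    """Per-drone QP of the form (21a):
        min  eps*||u - u_nom||^2 + w^2   s.t.  xi1^T u + xi2 <= w.
    With v = u - u_nom the constraint constant becomes xi2 + xi1^T u_nom,
    so the KKT conditions of the shifted problem give the answer."""
    c = xi2 + xi1[0] * u_nom[0] + xi1[1] * u_nom[1]
    if c <= 0.0:                     # constraint already met by u = u_nom
        return u_nom, 0.0
    n2 = xi1[0] ** 2 + xi1[1] ** 2
    lam = 2.0 * c / (n2 / eps + 1.0)
    v = (-lam * xi1[0] / (2 * eps), -lam * xi1[1] / (2 * eps))
    return (u_nom[0] + v[0], u_nom[1] + v[1]), lam / 2.0
```

When the coverage constraint is inactive the drone simply tracks the move-to-centroid input; otherwise the QP perturbs that input just enough to keep the decay of $J$ on schedule.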

The overall control architecture is visually depicted in Figure 4. The update of $\psi$ in (9) is implemented on a central computer, since it depends on the history of the coverage states of all drones. As discussed in depth in [Citation16], this does not pose a bottleneck for implementation in many applications. Once the importance indices $\psi_k$ with $\chi_k \in V_i^c(p)$ are fed back to each drone, the remaining blocks, consisting of the computation of the nominal input $u_{\mathrm{nom},i}$ in (19) and the solution of the QP (21a), are implemented in a distributed fashion.

Figure 4. Overall control architecture.


4. Demonstration on a ROS simulation

In this section, we validate the proposed controller (21a) through simulations on the same simulator as Section 2.2. In Section 4.1, we conduct a comparative analysis between our proposed controller (21a) and the previous controller (13a) in a single-drone monitoring scenario within a large-scale field. Subsequently, in Section 4.2, we extend our evaluation to demonstrate the superior performance of our proposed controller in a multi-drone setting.

Remark 4.1

In the ROS simulation, the drone dynamics are represented using the ideal mathematical model of single-integrator dynamics, which is the model employed in the controller design.

4.1. Evaluation for a single-drone case

In this subsection, we assess the performance of the proposed controller when employing a single drone ($n = 1$). All of the parameters are set to the same values as in Section 2.2. This analysis offers valuable insights into the effectiveness of the angle-aware coverage controller in this specific context.

In Figure 5, snapshots of the simulation for both (13a) and (21a) are presented, featuring a colour map depicting the importance index $\psi$. On closer inspection, it becomes apparent that the proposed controller (21a) leads the drone to cover the region in a more organized fashion, prioritizing areas nearer to the Voronoi region's centroid and gradually progressing towards other denser regions. In contrast, the original controller (13a) produces a more random coverage pattern, resulting in scattered unobserved patches across the field after a while. Consequently, the proposed controller achieves more efficient area coverage and quicker task completion than the previous controller (13a).

Figure 5. Snapshots of the colour maps of the importance index ψ for the controller (13a) (left side) and the controller (21a) (right side) using a single drone on the ROS simulation. The movie of this simulation can be seen here.


Furthermore, Figure 6 confirms that the proposed controller (21a) achieves a faster overall reduction rate of the cost function $J$ while avoiding frequent constraint violations in the transient, compared to the previous controller (13a). Specifically, the controller (21a) reaches $J \approx 0$ at $t \approx 900$ s, whereas the controller (13a) needs about $t \approx 1300$ s. Consequently, based on the above simulation, we can infer that the proposed controller not only covers the area more efficiently but also exhibits a faster reduction in $J$ than the previous one.

Figure 6. Comparison of the cost function J between the controller (13a) (blue line) and the proposed controller (21a) (green line) using a single drone. The red dashed line represents a line with slope −γ.


One might expect that simply deploying a sufficient number of drones could overcome the drawback of the controller (13a), provided the system cost is negligible. Indeed, with multiple drones, the area to be monitored by each drone shrinks. It is therefore worth confirming that the above advantage of the proposed controller also holds in the multi-drone scenario, which is investigated in the next subsection.

4.2. Evaluation for a multi-drone case

In this section, we extend our evaluation to a multi-drone scenario. We aim to demonstrate the superiority of our proposed angle-aware coverage controller (21a) over the previous controller (13a) even in the presence of multiple drones.

Let us consider three drones ($n = 3$) located initially at the positions $p_1 = [1.0\ 0.2]^T$ m, $p_2 = [1.0\ 2.0]^T$ m, and $p_3 = [1.0\ {-2.0}]^T$ m, respectively. These drones have a maximum speed range of $[-0.6, 0.6]$ m/s in each direction of motion. Each drone is equipped with a system to maintain a constant altitude of 1.0 m. Their horizontal viewing-angle space ($\Theta_h$) covers the full circle $[-\pi, \pi)$, while their vertical viewing-angle space ($\Theta_v$) is limited to the range $[\pi/3, \pi/2]$.

Figure 7 shows the snapshot comparisons between (13a) and (21a). Owing to the smaller area to be sampled by each drone, the mission is completed faster than in the single-drone case. However, we also see that under (13a) small unobserved areas remain on the left, which slows down the overall coverage in the same way as in Figure 5. Meanwhile, drones employing our proposed controller systematically cover the region. This observation is further supported by the data in Figure 8, which indicates that our controller (21a) achieves faster area coverage, even faster than the specified decay rate. Consequently, it takes only about $t = 330$ s to achieve $J \approx 0$, whereas the previous controller (13a) requires $t = 470$ s.

Figure 7. Snapshots of the colour maps of the importance index ψ for the controller (13a) (left side) and the controller (21a) (right side) using multiple drones on the ROS simulation. The movie of this simulation can be seen here.


Figure 8. Comparison of the cost function J between the controller (13a) (blue line) and the proposed controller (21a) (green line) using multiple drones. The red dashed line represents a line with slope −3γ.


In summary, we conclude that the proposed controller is beneficial even for the multi-drone case.

5. Demonstration through experiment

In this section, we showcase the effectiveness of the proposed controller (21a) through real-world experiments conducted in the Tokyo Tech Robot Zoo Sky testbed (see Figure 9(a)). Given the testbed's relatively small size, we purposefully adjust specific parameters to virtually create a larger area to be covered. These adjustments include lowering the drone's altitude to 30% of the simulation value, thereby shrinking the drone's field-of-view area while preserving nearly the same ratio between the field-of-view area and the total field area in both the simulation and the experiment. Moreover, we decrease the parameter $\sigma$ to 0.7 times the simulation value to reduce each drone's effective monitoring range, and we limit the drone's speed to enhance safety.

Figure 9. Overview of the experimental room and the schematic of the experiment. (a) Overview of the experimental room and (b) Schematic diagram of the experiment.


The schematic diagram of the system in Figure 9(b) depicts a configuration comprising Parrot Bebop 2 drones, a desktop computer housing an Intel® Core™ i7-8700K CPU (6 cores, 12 threads) with 32 GB of RAM, and laptops equipped with Intel® Core™ i7-8650U CPUs (4 cores, 8 threads) with 8 GB of RAM. Additionally, a motion capture system utilizing OptiTrack technology is integrated into the ROS framework. Each laptop serves as a distributed computation node, which is essential because the Bebop drone's onboard chip only accepts basic velocity inputs. The motion capture system records the drone positions at 120 frames per second and continuously transmits the data to the desktop computer, which in turn updates the importance indices $\psi_k$ using Equation (9).

Remark 5.1

In the experiment, we acknowledge the gap between the single-integrator dynamics and the actual drone dynamics. To address this disparity, we designed a local velocity controller for the drone so that the drone velocity follows the velocity commands from the high-level controller. Despite this remedy, the actual velocity, of course, does not coincide with the velocity command, especially in the high-frequency domain. On the other hand, cooperative control, including coverage control, generally does not require control as rapid as low-level robot motion control, e.g. in a factory. It is thus expected that the gap between the velocity and its command will not cause major problems.

To compare the proposed controller with the previous one, we employ a single drone ($n = 1$) starting at the position $p_1 = [1.0\ 0.2]^T$ m. The drone's speed is confined to $[-0.2, 0.2]\,\mathrm{m/s} \times [-0.2, 0.2]\,\mathrm{m/s}$, and a local controller maintains a constant altitude of 0.3 m. The viewing angles cover $[-\pi, \pi)$ horizontally and $[\pi/3, \pi/2]$ vertically. The target field is defined as $[-1.1, 1.1]\,\mathrm{m} \times [-1.1, 1.1]\,\mathrm{m} \times [0.0, 0.5]\,\mathrm{m}$. The virtual field $Q_c$ is discretized into $1.5 \times 10^7$ cells, each represented as a polyhedron with dimensions $0.02\,\mathrm{m} \times 0.02\,\mathrm{m} \times 0.1\,\mathrm{m} \times \frac{\pi}{30}\,\mathrm{rad} \times \frac{\pi}{30}\,\mathrm{rad}$ and volume $A = \frac{4\pi^2}{9} \times 10^{-7}\,\mathrm{m^3\,rad^2}$. Additionally, the plane $\mathcal{P}$ is divided into $1.0 \times 10^4$ polygons, each forming a $0.03\,\mathrm{m} \times 0.03\,\mathrm{m}$ square. For the importance indices, we start with $\phi_j = 1$ for all $j \in \mathcal{M}$ and compress them into $\psi_k$. For the other parameters, we set $\sigma = 0.07$, $\epsilon = 0.0001$, $\gamma = 0.15$, $\delta = 5$, and $a = 1$.

In Figure 10, we present snapshots from an experiment closely resembling the simulation setup discussed in Section 4. This experimental scenario reveals a consistent pattern: the drone under the guidance of the proposed controller exhibits systematic coverage behaviour. This finding aligns with the evidence presented in Figure 11, highlighting the superior performance of our controller (21a). It achieves faster area coverage, completing the task in approximately $t = 150$ s, while the previous controller requires approximately $t = 180$ s. This difference, although less pronounced than in the simulation with its larger field, underscores the efficiency of our approach.

Figure 10. Snapshots of the colour maps of the importance index ψ for the controller (13a) (left side) and the controller (21a) (right side) in the experiments. The video can be seen here.


Figure 11. Comparison of the cost function J between the controller (13a) (blue line) and the proposed controller (21a) (green line) using a single drone.


It is important to note that in this experiment we used a smaller value of $\sigma$ than in the simulation. With this smaller $\sigma$, the drone's ability to monitor points within its field of view is reduced, making it more challenging to cover the area effectively. Additionally, unlike in the simulation, the smaller $\sigma$, together with the uncertainties inherent in a real experiment, causes even the proposed algorithm to leave small unobserved patches, which makes the gap between the two controllers less pronounced than in the simulation. Nevertheless, the results still demonstrate the advantages of the proposed controller over the conventional method (13a). While this experiment was conducted in a relatively small field measuring $2.2\,\mathrm{m} \times 2.2\,\mathrm{m}$, it holds promising implications for larger-scale applications. The systematic coverage behaviour and efficiency demonstrated by our controller in this experiment suggest even more substantial benefits when applied to larger areas, particularly in the context of large-scale 3D map reconstruction. Faster coverage completion also translates into potential battery savings for the drone itself, further emphasizing the practical benefits of our approach in real-world drone applications.

6. Conclusion

In this paper, we presented a novel angle-aware coverage control method aimed at enhancing coverage efficiency for large-scale 3D map reconstruction using drone networks. By integrating a Voronoi-based nominal input into the existing angle-aware coverage control framework, our approach achieved a more systematic and efficient coverage pattern. This systematic coverage not only ensured diverse viewing angles, thereby improving 3D map reconstruction quality, but also effectively covered distant unobserved regions, significantly enhancing monitoring time efficiency, particularly in large-scale mapping scenarios. The proposed QP-based controller was successfully implemented, and its efficacy was demonstrated through simulation in ROS as well as real-world experiments conducted on our testbed. These results affirm the potential of our approach as a practical solution for improving monitoring efficiency in the context of large-scale 3D map reconstruction.

Disclosure statement

No potential conflict of interest was reported by the author(s).

Additional information

Funding

This research was partially funded by Japan Society for the Promotion of Science (JSPS) KAKENHI [grant number 21K04104].


References

  • Rutkauskas M, Asenov M, Ramamoorthy S, et al. Autonomous multi-species environmental gas sensing using drone-based Fourier-transform infrared spectroscopy. Opt Express. 2019;27(7):9578–9587. doi: 10.1364/OE.27.009578
  • Qu C, Singh R, Morel AE, et al. Obstacle-aware and energy-efficient multi-drone coordination and networking for disaster response. In: 2021 17th International Conference on Network and Service Management (CNSM); IEEE; 2021. p. 446–454.
  • Qu C, Sorbelli FB, Singh R, et al. Environmentally-aware and energy-efficient multi-drone coordination and networking for disaster response. IEEE Trans Netw Serv Manag. 2023.
  • Albani D, Manoni T, Arik A, et al. Field coverage for weed mapping: toward experiments with a UAV swarm. In: Bio-inspired Information and Communication Technologies: 11th EAI International Conference, BICT 2019, Pittsburgh, PA, USA, March 13–14, 2019, Proceedings 11; Springer; 2019. p. 132–146.
  • Mammarella M, Donati C, Shimizu T, et al. 3D map reconstruction of an orchard using an angle-aware covering control strategy. IFAC-PapersOnLine. 2022;55(32):271–276. doi: 10.1016/j.ifacol.2022.11.151
  • Mammarella M, Comba L, Biglia A, et al. Cooperation of unmanned systems for agricultural applications: a theoretical framework. Biosyst Eng. 2022;223:61–80. doi: 10.1016/j.biosystemseng.2021.11.008
  • Seraj E, Gombolay M. Coordinated control of UAVs for human-centered active sensing of wildfires. In: 2020 American Control Conference (ACC); IEEE; 2020. p. 1845–1852.
  • Seraj E, Silva A, Gombolay M. Multi-UAV planning for cooperative wildfire coverage and tracking with quality-of-service guarantees. Auton Agent Multi Agent Syst. 2022;36(2):39. doi: 10.1007/s10458-022-09566-6
  • Mammarella M, Comba L, Biglia A, et al. Cooperation of unmanned systems for agricultural applications: a case study in a vineyard. Biosyst Eng. 2022;223:81–102. doi: 10.1016/j.biosystemseng.2021.12.010
  • Tagarakis AC, Kalaitzidis D, Filippou E, et al. 3D scenery construction of agricultural environments for robotics awareness. In: Information and Communication Technologies for Agriculture, Theme III: Decision. Springer; 2022. p. 125–142.
  • Cortes J, Martinez S, Bullo F. Spatially-distributed coverage optimization and control with limited-range interactions. ESAIM: Contr Optim Calculus Variations. 2005;11(4):691–719.
  • Schwager M, Julian BJ, Angermann M, et al. Eyes in the sky: decentralized control for the deployment of robotic camera networks. Proc IEEE. 2011;99(9):1541–1561. doi: 10.1109/JPROC.2011.2158377
  • Palacios-Gasós JM, Montijano E, Sagüés C, et al. Distributed coverage estimation and control for multirobot persistent tasks. IEEE Trans Robot. 2016;32(6):1444–1460. doi: 10.1109/TRO.2016.2602383
  • Sugimoto K, Hatanaka T, Fujita M, et al. Experimental study on persistent coverage control with information decay. In: 2015 54th Annual Conference of the Society of Instrument and Control Engineers of Japan (SICE); IEEE; 2015. p. 164–169.
  • Wang YW, Zhao MJ, Yang W, et al. Collision-free trajectory design for 2-d persistent monitoring using second-order agents. IEEE Trans Control Netw Syst. 2019;7(2):545–557. doi: 10.1109/TCNS.6509490
  • Dan H, Hatanaka T, Yamauchi J, et al. Persistent object search and surveillance control with safety certificates for drone networks based on control barrier functions. Front Robot AI. 2021;333.
  • Shimizu T, Yamashita S, Hatanaka T, et al. Angle-aware coverage control for 3-d map reconstruction with drone networks. IEEE Contr Syst Lett. 2021;6:1831–1836. doi: 10.1109/LCSYS.2021.3135466
  • Suenaga M, Shimizu T, Hatanaka T, et al. Experimental study on angle-aware coverage control with application to 3-d visual map reconstruction. In: 2022 IEEE Conference on Control Technology and Applications (CCTA); IEEE; 2022. p. 327–333.
  • Daftry S, Hoppe C, Bischof H. Building with drones: accurate 3D facade reconstruction using MAVs. In: 2015 IEEE International Conference on Robotics and Automation (ICRA); IEEE; 2015. p. 3487–3494.
  • Gupta SK, Shukla DP. Application of drone for landslide mapping, dimension estimation and its 3d reconstruction. J Indian Soc Remote Sens. 2018;46:903–914. doi: 10.1007/s12524-017-0727-1
  • Andersen MS, Dahl J, Vandenberghe L. CVXOPT: a Python package for convex optimization. 2013. Available from: cvxopt.org.
  • Cortes J, Martinez S, Karatas T, et al. Coverage control for mobile sensing networks. IEEE Trans Rob Autom. 2004;20(2):243–255. doi: 10.1109/TRA.2004.824698
  • Lloyd S. Least squares quantization in PCM. IEEE Trans Inform Theory. 1982;28(2):129–137. doi: 10.1109/TIT.1982.1056489
  • Ames AD, Xu X, Grizzle JW, et al. Control barrier function based quadratic programs for safety critical systems. IEEE Trans Automat Contr. 2016;62(8):3861–3876. doi: 10.1109/TAC.2016.2638961

Appendix 1.

Zeroing control barrier function and QP-based controller

In this Appendix, we present the precise definition of the zeroing control barrier function (ZCBF) and associated QP-based controller.

Let us consider the system
$$\dot{x} = f(x) + g(x)u, \tag{A1}$$
where $x \in \mathbb{R}^N$ represents the system state, $u \in U \subset \mathbb{R}^M$ represents the control input, and $f: \mathbb{R}^N \to \mathbb{R}^N$ and $g: \mathbb{R}^N \to \mathbb{R}^{N \times M}$ are vector fields assumed to be Lipschitz continuous.

Next, we consider a continuously differentiable function $b: \mathbb{R}^N \to \mathbb{R}$ and the set $C := \{x \in \mathbb{R}^N \mid b(x) \ge 0\}$. The function $b$ is said to be a ZCBF for the set $C$ if there exists a set $D$ with $C \subseteq D \subseteq \mathbb{R}^N$ such that
$$\sup_{u \in U}\left[L_f b(x) + L_g b(x)\,u + \alpha(b(x))\right] \ge 0 \quad \forall x \in D,$$
where $L_f b(x)$ and $L_g b(x)$ are the Lie derivatives of $b(x)$ along $f(x)$ and $g(x)$, respectively, and $\alpha$ is an extended class $\mathcal{K}$ function. Hence, as long as $b$ is a ZCBF, there always exists $u \in U$ that keeps the state inside $C$, even at the boundary of $C$.
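As an illustration, the ZCBF condition can be checked numerically in a simple case. The following sketch (our own example, not taken from the paper) uses a single integrator $\dot{x} = u$, so that $f \equiv 0$ and $g = I$, the barrier $b(x) = 1 - \|x\|^2$ whose zero-superlevel set $C$ is the unit disc, box inputs $U = [-u_{\max}, u_{\max}]^2$, and $\alpha(s) = s$; over a box, $\sup_{u \in U} a^\top u = u_{\max}\|a\|_1$.

```python
import numpy as np

# Numerically check the ZCBF inequality
#   sup_{u in U} [L_f b(x) + L_g b(x) u + alpha(b(x))] >= 0  for all x in D
# for a single integrator (f = 0, g = I), b(x) = 1 - ||x||^2,
# box inputs U = [-u_max, u_max]^2, and alpha(s) = s.
u_max = 1.0
b = lambda x: 1.0 - x @ x
grad_b = lambda x: -2.0 * x          # L_g b(x) for g = I (and L_f b = 0)

def sup_condition(x):
    # sup over the box U of a^T u equals u_max * ||a||_1
    a = grad_b(x)
    return u_max * np.abs(a).sum() + b(x)

# sample D = {x : ||x|| <= 1.2}, a superset of C
grid = np.linspace(-1.2, 1.2, 61)
ok = all(sup_condition(np.array([x1, x2])) >= 0.0
         for x1 in grid for x2 in grid
         if x1 * x1 + x2 * x2 <= 1.2 ** 2)
print(ok)  # True: b qualifies as a ZCBF on the sampled D
```

In this example the worst case is at the origin, where the gradient term vanishes but $b(x) = 1 > 0$, so the inequality holds everywhere on the sampled set.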

A set $S \subset \mathbb{R}^N$ is said to be forward invariant if $x(t_0) \in S$ implies $x(t) \in S$ for all $t \in [t_0, t_1]$. It is proved in [Citation24] that forward invariance of the set $C$ is rendered by any Lipschitz continuous controller $u(x)$ satisfying the constraint $L_f b(x) + L_g b(x)u + \alpha(b(x)) \ge 0$. As such a controller, Ames et al. [Citation24] presented the QP-based controller for a given nominal input $u_{\mathrm{nom}}$:
$$u^*(x) = \operatorname*{argmin}_{u \in U} \|u - u_{\mathrm{nom}}\|^2 \quad \text{s.t.} \quad L_f b(x) + L_g b(x)u + \alpha(b(x)) \ge 0.$$
This controller achieves the control action closest to the nominal one while guaranteeing forward invariance of the set $C$.
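When there is a single affine constraint and the input bounds are inactive, this QP admits a closed-form solution: project $u_{\mathrm{nom}}$ onto the half-space $\{u \mid L_g b(x)u \ge -L_f b(x) - \alpha(b(x))\}$. A minimal sketch for a single integrator ($f \equiv 0$, $g = I$); the helper names are our own, not from the paper:

```python
import numpy as np

def zcbf_qp_filter(u_nom, x, grad_b, b, alpha=1.0):
    """Closed-form ZCBF safety filter for a single integrator (f = 0, g = I).

    Solves  min ||u - u_nom||^2  s.t.  grad_b(x)^T u + alpha * b(x) >= 0
    by projecting u_nom onto the half-space {u : a^T u >= c}.
    Input bounds are assumed inactive; otherwise a QP solver
    (e.g. CVXOPT, cited in the paper) is needed.
    """
    a = grad_b(x)                 # L_g b(x) for g = I
    c = -alpha * b(x)             # constraint: a^T u >= c
    if a @ u_nom - c >= 0.0:      # nominal input already safe
        return u_nom
    # project onto the boundary hyperplane a^T u = c
    return u_nom + (c - a @ u_nom) / (a @ a) * a

# Example: keep x inside the unit disc, b(x) = 1 - ||x||^2
b = lambda x: 1.0 - x @ x
grad_b = lambda x: -2.0 * x

x = np.array([0.9, 0.0])          # state near the boundary of C
u_nom = np.array([1.0, 0.0])      # nominal input pushing outward
u = zcbf_qp_filter(u_nom, x, grad_b, b)   # outward velocity is attenuated
```

Here the filter leaves a safe nominal input untouched and otherwise applies the smallest correction that restores the barrier constraint, mirroring the minimally invasive behaviour of the QP-based controller above.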