Abstract
This paper presents a novel angle-aware coverage control method that enhances monitoring efficiency for large-scale 3D map reconstruction using drone networks. The proposed method integrates Voronoi-based coverage control, as the nominal input, into the quadratic programming problem solved in the original angle-aware coverage control. This approach provides a practical solution, ensuring diverse viewing angles to improve the quality of 3D map reconstruction and offering effective coverage of distant unobserved areas, particularly for large-scale missions. We implement the proposed method and validate its effectiveness through simulations on the Robot Operating System as well as experiments conducted on a robotic testbed. A comparative analysis demonstrates the advantages of the proposed approach, showing increased monitoring efficiency for large-scale 3D map reconstruction.
1. Introduction
Recently, coordinated control of drone networks has emerged as a critical factor in various applications, including gas sensing [Citation1], disaster management [Citation2,Citation3], smart agriculture [Citation4–6], and wildfire tracking [Citation7,Citation8]. By utilizing multiple drones, we can improve mission efficiency, while enhancing robustness against potential failures of individual drones. Particularly in the domain of smart agriculture, drone networks are expected to offer valuable insights to enhance crop management practices and to optimize yields through crop monitoring and inspection [Citation9,Citation10]. To achieve high-quality inspection, advanced techniques like Structure from Motion (SfM) have been employed for generating a 3D map of agricultural fields from aerial images [Citation5]. In this scenario, the drones in the network need to sample each point in the environment from diverse viewing angles, thereby guaranteeing comprehensive coverage and accurate data acquisition.
To tackle the above challenge, classical coverage control techniques [Citation11,Citation12] have been implemented to facilitate the efficient deployment of mobile sensors in a distributed manner. Alternatively, persistent coverage control algorithms [Citation13–16] have also been studied, allowing drones to patrol the environment and dynamically adjust importance indices assigned to points in the environment based on the previous coverage states. These control technologies are expected to be useful for effective image sampling for 3D map reconstruction as well. However, a limitation of these approaches is their tendency to sample images primarily from the top of the canopy, thereby restricting the capture of data from diverse and rich viewing angles.
The authors of [Citation17] proposed a novel angle-aware coverage control scheme for sampling images from rich viewing angles. They formulated the coverage control problem in a 5-dimensional virtual field, incorporating a 3-dimensional set of target points for sampling and a 2-dimensional space characterizing the viewing angles. A quadratic programming (QP)-based controller also enabled adaptive drone motion, accelerating sampling in well-observed regions and decelerating it otherwise. Suenaga et al. [Citation18] demonstrated experimentally that the angle-aware method yields drastically better map quality than the traditional persistent coverage control [Citation16]. However, when applied to large-scale map reconstruction, the previous controller tends to delay visits to distant unobserved points, significantly degrading the overall monitoring efficiency. This drawback becomes particularly critical when monitoring larger fields.
In this paper, we extend the previous angle-aware coverage controller to address the above limitation, serving as a practical solution for large-scale map reconstruction applications. Firstly, we outline the drawbacks of [Citation17] through simulation, emphasizing its shortcomings in effectively monitoring large-scale map reconstruction. Secondly, we propose a novel QP-based controller that incorporates Voronoi-based coverage control [Citation11,Citation12] into the original angle-aware control method as a nominal control input. We demonstrate that the present controller enhances the monitoring efficiency, as observed through simulations on Robot Operating System (ROS) and experiments conducted on a robotic testbed.
2. Angle-aware coverage control
The goal of this section is to briefly review the angle-aware coverage control, which serves as the basis for the extension presented in this paper. We begin by outlining the mathematical formulation of the angle-aware coverage control problem. Subsequently, we summarize the solution proposed by Shimizu et al. [Citation17] and highlight, through simulation, the challenges this controller faces in a large-scale 3D map reconstruction application.
2.1. Problem formulation
The angle-aware coverage problem is defined as a task of monitoring a field using multiple drones, in order to ensure that every point within the field is observed from various viewing angles. This problem originated from efficient image sampling for 3D map reconstruction through Structure from Motion (SfM) techniques [Citation6,Citation17–20].
Consider a fleet of n drones equipped with on-board cameras whose index set is denoted by . These drones are situated in a 3D space, each having a position relative to the world frame . A local controller maintains a constant attitude and altitude, with the altitude fixed at , for all the drones. Accordingly, our primary focus is on the motion of the 2D coordinates , where denotes a compact subset of the plane with constant z-coordinate in which must lie for all . Figure shows an illustration of the intended scene. Moreover, each drone adheres to the dynamics: where is the velocity input to be designed.
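As a concrete illustration, the single-integrator dynamics with a bounded input space can be sketched as a simple Euler update; the time step and speed limit below are illustrative values, not taken from the paper.

```python
import numpy as np

def step(p, u, dt=0.1, u_max=1.0):
    """One Euler step of the single-integrator dynamics p_dot = u.

    The saturation to u_max models the bounded input space; both dt and
    u_max are illustrative assumptions, not values from the paper.
    """
    u = np.asarray(u, dtype=float)
    speed = np.linalg.norm(u)
    if speed > u_max:
        u = u * (u_max / speed)  # clip the command to the admissible input set
    return np.asarray(p, dtype=float) + dt * u
```

In the simulations described later, this integration is handled by the ROS simulator; the sketch only fixes the model used for controller design.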
Let us next define a target field as a compact set containing the field's surface, as illustrated in Figure . The primary objective is to ensure that every point within the target field is observed from a diverse set of viewing angles characterized by the horizontal angle and the vertical angle in Figure . To effectively address this coverage problem, we define a virtual field consisting of the five variables . Then, the drone position looking at from the viewing angles and is uniquely determined by the map: (1) We assume that the monitoring performance at point by the i-th drone is determined by the performance function with a design parameter as below. (2) Furthermore, we define an importance index assigned to each point q in . To this end, we partition the virtual field into a group of m 5-D polygons and designate , , where refers to the 5-D coordinates of the centre of the jth cell. Additionally, each cell is assumed to have the same volume A. Consequently, we assign an importance index to each cell . Following the rule defined in [Citation17], is assumed to decay according to (3) where is a positive scalar.
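The interplay of (2) and (3) can be sketched on a discretized field as below. Since the exact expressions are given in the paper, this is only a structural illustration: the Gaussian form of the performance function h and the aggregation over drones (here a max) are assumptions for illustration.

```python
import numpy as np

def h(p, q, sigma):
    """Assumed Gaussian-type monitoring performance of a drone at p
    for a cell centre q; the paper's exact form is given in (2)."""
    return np.exp(-np.sum((p - q) ** 2) / (2.0 * sigma ** 2))

def decay_importance(psi, cells, drones, sigma, delta, dt):
    """One step of an assumed decay law psi_j' = -delta * max_i h(p_i, q_j) * psi_j,
    integrated exactly over dt for each cell (cf. (3))."""
    for j, qj in enumerate(cells):
        best = max(h(p, qj, sigma) for p in drones)  # best coverage among drones
        psi[j] *= np.exp(-delta * best * dt)         # exact solution of the linear decay
    return psi
```

Cells near a drone thus lose importance quickly, while distant cells keep their initial value until visited, which is exactly the state the coverage controller acts on.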
We are now ready to present the objective function to be minimized as: (4) As J approaches zero, the drones are expected to capture images ideal for reconstructing the 3D map. Therefore, the primary objective is to control the drones so that J converges to zero. Moreover, a secondary objective is added in order to specify the transient behaviour of the drones, depending on the progress of the image sampling. If a drone is in a well-observed area with a low value, it should leave the region as soon as possible. In contrast, if it is in a region that has not been fully observed, with a high value, it should stay in the region to monitor it further.
2.2. Control method and its drawback in large-scale map reconstruction
To achieve the objectives defined in the previous section, the authors of [Citation17] presented a controller that enforces the constraint based on the concept of control barrier functions (please refer to Appendix A for more details). Now, the time derivative of J follows (5) where (6) corresponds to the contribution of drone i to reducing J in (4). The set , which depends on , is a Voronoi-like partition of the set defined as (7) Now, the number of cells m tends to be huge due to the higher dimension of the field compared with the standard coverage control [Citation11–16]. This makes implementation of the controller computationally very expensive. To address this issue, the drone field is discretized into l polygons, denoted as , where . We also denote the coordinates of the centre of the l-th polygon by . The coverage, represented by , is then assessed by projecting it onto one of these polygons, specifically the one where is situated. This process leads to the transformation of into , , as governed by the following equation: (8) Under the slight approximation of , the update of can be approximated as (9) Here is the approximation of the performance function using indices as follows (10) Equation (9) means that we only need to keep track of rather than . Consequently, in view of , the time derivative of J can also be written as (11) where (12) Finally, the authors of [Citation17] presented the QP-based controller (13a) (13b) where represents a slack variable (), ϵ is a positive constant that determines the strength of the penalty on constraint violations, and are defined as follows: (14a) (14b) Please refer to [Citation17] for more details of the controller.
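The structure of the QP (13a)-(13b), minimizing the control effort plus a slack penalty subject to a softened linear constraint, admits a closed-form solution when a single constraint is active. The sketch below illustrates this via the KKT conditions; the vector a and scalar b stand in for the constraint data built from (14a)-(14b) and are assumptions here, not the paper's exact terms.

```python
import numpy as np

def soft_qp(a, b, eps):
    """Closed-form solution of the single-constraint sketch
        min_{u,w} ||u||^2 + eps * w^2   s.t.  a.T u >= b - w,
    mirroring the soft-constrained structure of the QP (13).

    a, b abstract the constraint data of (14); eps penalizes the slack w.
    """
    a = np.asarray(a, dtype=float)
    if b <= 0.0:                            # constraint inactive at u = 0, w = 0
        return np.zeros_like(a), 0.0
    lam = 2.0 * b / (a @ a + 1.0 / eps)     # KKT multiplier of the active constraint
    return 0.5 * lam * a, 0.5 * lam / eps   # optimal input u and slack w
```

In the paper's implementation this QP is handed to CVXOPT; the closed form above only makes the trade-off between control effort and constraint violation explicit.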
Next, we present a simulation case study for a large-scale map reconstruction mission, and reveal a fundamental problem of the controller (13a). The simulation was conducted on ROS Noetic, solving the associated quadratic programming problem using CVXOPT [Citation21].
Let us now consider a single drone (n=1) with an initial position of . The drone's speed is restricted by defining the input space as . Each drone is equipped with a local controller to maintain a constant altitude of . The viewing angle space spans , while covers the range . The target field is defined as as shown in Figure (a). The drones navigate through the field P to monitor all positions of the target field from various angles. To achieve this, we discretize the virtual field into cells. Each cell is represented as a polyhedron with dimensions , and its volume is . In order to implement the controller (13a), the field P is also discretized into polygons, where each polygon forms a square. To determine the significance of each cell, we assign an initial value of the importance indices for all and compress them into , following the distribution shown in Figure (a). To ensure smoothness, we set , effectively reducing h to almost zero at a drone's viewing range of . Other essential parameters include , , , and a=1.
Figure (b) displays snapshots of the simulation, showcasing a colour map of the field representing the importance index, ψ. Regions depicted in red indicate high importance, while blue regions indicate low importance. Upon closer examination of these snapshots, we notice that, after a while, scattered unobserved patches throughout the field become apparent. Consequently, the drone covers long distances merely to visit small pieces of unobserved areas, resulting in wasteful operations in both time and energy.
Figure illustrates the evolution of the cost function J, showing a progressive reduction of J towards zero over time. As observed in Figure (b), after approximately , the rate of decrease in J diminishes due to the frequent constraint violations that are allowed by softening the constraint associated with in (13a). This observation confirms that, after a while, the drones' efficiency decreases because they need to travel long distances to visit small unobserved areas. This can be attributed to the fact that the distance of unobserved points from the drone is not considered in the objective function. Consequently, distant unobserved points have minimal impact on the defined efficiency and are assigned lower priority. While this phenomenon may not significantly affect monitoring efficiency for smaller areas, as demonstrated in the experiment at the Tokyo Tech Robot Zoo Sky testbed [Citation18], it becomes more apparent when the field size is larger.
In the subsequent sections, we will propose a new approach to address the above issue, with the aim of enhancing the monitoring efficiency of individual drones when performing 3D map reconstruction for large-scale fields.
3. Controller design for large-scale map reconstruction
This section presents a novel controller suitable for large-scale map reconstruction. In the previous section, we hypothesized that the small pieces of unobserved area were due to the lack of evaluation of the distance between and . We thus reflect this factor in the controller design in this section. A straightforward idea would be to modify the cost function J so that it explicitly involves the distance. However, after extensive trials, the results did not meet our expectations for achieving efficient coverage. Specifically, the drones became stuck at specific positions without continuing exploratory actions. We thus approach the problem differently, namely we employ a nominal control input and reflect the distance in the design of . The primary motivation behind the introduction of is to enable the drones not only to reduce the objective function J in (4) but also to be attracted to locations with higher and denser importance values, even when these locations are distant from the drone's current position. Consequently, in the context of large-scale monitoring, the drones are expected to behave more efficiently, while also avoiding unnecessary extensive travel to inspect small unobserved areas.
Let us design the nominal controller . To this end, we focus on the following cost function presented in [Citation22]. (15) where is called the Voronoi cell and defined as (16) and is defined by if . Notice that the distance between and is explicitly evaluated in the cost function . Define the centroid of the Voronoi cell by (17) The gradient of (15) with respect to is then shown in [Citation22] to be (18) Based on this equation and the Lloyd descent algorithm [Citation23], the authors of [Citation22] presented a gradient-descent coverage controller. Consequently, the gradient-based move-to-centroid controller [Citation22] has been established to guide each drone toward a critical point of the cost function .
Based on the above investigations, we design the nominal controller as (19) where k>0 is an appropriately tuned control gain. In implementation, the centroid is approximately computed by (20) Note that the importance index is updated by (9) and hence time-varying, unlike in [Citation22].
Let us next present a QP-based controller suitable for a large-scale map reconstruction mission. In the presence of , the QP-based controller (13a) is transformed into the following form: (21a) (21b) In this context, , , and are defined according to (14a), (14b), and (19), respectively. The controller enforces constraints through the function J as described in (4), while is generated using the function in (15), which considers the distance between and .
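Compared with (13), only the objective of (21) changes: the solution is kept as close as possible to the nominal input instead of to zero. For a single linear constraint this again admits a closed form, sketched below with a and b abstracting the constraint data of (14); both are assumptions for illustration.

```python
import numpy as np

def qp_with_nominal(u_nom, a, b, eps):
    """Single-constraint closed-form sketch of the proposed QP (21):
        min_{u,w} ||u - u_nom||^2 + eps * w^2   s.t.  a.T u >= b - w.

    When the nominal input already satisfies the constraint it is passed
    through unchanged; otherwise it is minimally corrected.
    """
    a = np.asarray(a, dtype=float)
    u_nom = np.asarray(u_nom, dtype=float)
    r = b - a @ u_nom
    if r <= 0.0:                          # nominal input already feasible
        return u_nom.copy(), 0.0
    lam = 2.0 * r / (a @ a + 1.0 / eps)   # KKT multiplier of the active constraint
    return u_nom + 0.5 * lam * a, 0.5 * lam / eps
```

This pass-through behaviour is the key difference from (13): the Voronoi-based attraction toward distant unobserved regions survives whenever it does not conflict with the coverage constraint.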
The overall control architecture is depicted in Figure . The update of ψ in (9) is implemented on a central computer since it depends on the history of the coverage states for all drones. As discussed in depth in [Citation16], this does not pose any bottleneck for the implementation in many applications. Once the importance indices are fed back to each drone, the remaining blocks, consisting of the computation of the nominal input in (19) and the solution to the QP in (21a), are implemented in a distributed fashion.
4. Demonstration on a ROS simulation
In this section, we validate the proposed controller (21a) through simulations on the same simulator as in Section 2.2. In Section 4.1, we conduct a comparative analysis between our proposed controller (21a) and the previous controller (13a) in a single-drone monitoring scenario within a large-scale field. Subsequently, in Section 4.2, we extend our evaluation to demonstrate the superior performance of our proposed controller in a multi-drone setting.
Remark 4.1
In the ROS simulation, the drone dynamics are represented using the ideal mathematical model of single integrator dynamics, which is employed in the controller design.
4.1. Evaluation for a single-drone case
In this subsection, we assess the performance of the present controller when employing a single drone (n=1). All of the parameters are set to the same values as in Section 2.2. This analysis offers valuable insights into the effectiveness of the angle-aware coverage controller in this specific context.
In Figure , snapshots of the simulation for both (13a) and (21a) are presented, featuring a colour map depicting the importance index ψ. On closer inspection, it becomes apparent that the proposed controller (21a) leads the drone to cover the region in a more organized fashion, prioritizing areas nearer to the Voronoi region's centroid and gradually progressing towards other denser regions. In contrast, the original controller (13a) leads the drone to a more random coverage pattern, resulting in scattered unobserved patches across the field after a while. Consequently, the proposed controller achieves more efficient area coverage and quicker task completion compared to the previous controller (13a).
Furthermore, Figure also confirms that the proposed controller (21a) achieves a faster overall reduction rate of the cost function J while avoiding frequent constraint violations in the transient, compared to the previous controller (13a). Specifically, the controller (21a) reaches at , whereas the controller (13a) needs about to complete . Consequently, based on the above simulation, we can infer that the proposed controller not only covers the area more efficiently but also exhibits a faster reduction in J than the previous one.
One might think that simply deploying a sufficient number of drones could overcome the drawback of the controller (13a), provided the associated system cost is negligible. Indeed, when multiple drones are used, the total area to be monitored by each drone becomes smaller. It is thus beneficial to confirm that the above advantage of the proposed controller also holds in the multi-drone scenario, which is investigated in the next subsection.
4.2. Evaluation for a multi-drone case
In this section, we extend our evaluation to a multi-drone scenario. We aim to demonstrate the superiority of our proposed angle-aware coverage controller (21a) over the previous controller (13a) even in the presence of multiple drones.
Let us consider three drones (n=3) initially located at positions m, m, and m, respectively. The drones have a maximum speed of m/s for both horizontal and vertical movement. Each drone is equipped with a system to maintain a constant altitude of 1.0 m. Their horizontal viewing angle () covers a full circle from to π, while their vertical viewing angle () is limited to the range between and .
Figure shows the snapshot comparisons between (13a) and (21a). Due to the smaller areas to be sampled by each drone, the mission is completed faster than in the single-drone case. However, we also see that small unobserved areas remain on the left, which slows down the overall coverage in the same way as in Figure . Meanwhile, drones employing our proposed controller systematically cover the region. This observation is further supported by the data in Figure , which indicates that our controller (21a) achieves faster area coverage, even faster than the specified decay rate. Consequently, it takes only about t=330 seconds to achieve , whereas the previous controller (13a) requires t=470 seconds.
In summary, we conclude that the proposed controller is beneficial even for the multi-drone case.
5. Demonstration through experiment
In this section, we showcase the effectiveness of the proposed controller (21a) through real-world experiments conducted in the Tokyo Tech Robot Zoo Sky testbed (see Figure (a)). Given the testbed's relatively small size, we purposefully adjust specific parameters to virtually create a larger area to be covered. These adjustments include lowering the drone's altitude to only of the simulation, consequently decreasing the drone's field-of-view area and preserving nearly the same ratio between the drone's field-of-view area and the total field area in both the simulation and the experiment. Moreover, we also decrease the parameter σ to 0.7 times its simulation value to reduce the monitoring capability accordingly, and limit the drone's speed to enhance safety.
The schematic diagram of the system in Figure (b) depicts a configuration comprising Parrot Bebop 2 drones, a desktop computer housing an Intel Core i7-8700K CPU with 6 cores, 12 threads, and 32 GB of RAM, and laptops equipped with Intel Core i7-8650U CPUs featuring 4 cores, 8 threads, and 8 GB of RAM. Additionally, a motion capture system utilizing OptiTrack technology is integrated into the ROS framework. Each laptop serves as a distributed computation node, essential because the Bebop drone's onboard chip only accepts basic velocity inputs. The system records the drone positions at a rate of 120 frames per second and continuously transmits the data to the desktop computer. The desktop computer, in turn, updates the importance indices using Equation (9).
Remark 5.1
In the experiment, we acknowledge the gap between the single-integrator dynamics and the actual drone dynamics. To address this disparity, we designed a local velocity controller for the drone so that the drone velocity follows the velocity commands from the high-level controller. Despite this remedy, the actual velocity, of course, does not coincide with the velocity command, especially in a high frequency domain. On the other hand, in general, cooperative control, including coverage control, does not require rapid control actions as in low-level robot motion control, e.g. in a factory. It is thus expected that the gap between the velocity and its command will not cause major problems.
To compare the proposed controller with the previous one, we employ a single drone (n=1) starting at position . The drone's speed is confined to , and it has a local controller to maintain a constant altitude of . The viewing angles cover horizontally and vertically. The target field is defined as . The virtual field is discretized into cells, each represented as a polyhedron with dimensions and volume . Additionally, the plane is divided into polygons, each forming a square. For the importance indices, we start with for all and compress them into . For the other parameters, we set , , , , and a=1.
In Figure , we present snapshots from an experiment closely resembling the simulation setup discussed in Section 4. This experimental scenario also reveals a consistent pattern: the drone under the guidance of the proposed controller exhibits systematic coverage behaviour. This finding aligns with the evidence presented in Figure , highlighting the superior performance of our controller (21a). It achieves faster area coverage, completing the task in approximately t=150 seconds, while the previous controller requires approximately t=180 seconds. This difference, although less pronounced than in the simulation with a larger field, underscores the efficiency of our approach.
It is important to note that in this experiment, we used a smaller value of σ compared to the simulation. With this smaller σ, the drone's ability to monitor points within its field of view is reduced, making it more challenging for the drone to cover the area effectively. Additionally, unlike in the simulation, the impact of the smaller σ, together with the uncertainties present in the real experiment, results in even the present algorithm generating small unobserved patches, which makes the gap between the controllers less pronounced than in the simulation. However, despite this challenge, the results still demonstrate the advantages of the proposed controller over the conventional method (13a). While this experiment was conducted in a relatively small field measuring , it holds promising implications for larger-scale applications. The systematic coverage behaviour and efficiency demonstrated by our controller in this experiment suggest even more substantial benefits when applied to larger areas, particularly in the context of large-scale 3D map reconstruction. Achieving faster coverage completion also translates into potential battery savings for the drone itself, further emphasizing the practical benefits of our approach in real-world drone applications.
6. Conclusion
In this paper, we presented a novel angle-aware coverage control method aimed at enhancing coverage efficiency for large-scale 3D map reconstruction using drone networks. By integrating a Voronoi-based nominal input into the existing angle-aware coverage control framework, our approach achieved a more systematic and efficient coverage pattern. This systematic coverage not only ensured diverse viewing angles, thereby improving 3D map reconstruction quality, but also effectively covered distant unobserved regions, significantly enhancing monitoring time efficiency, particularly in large-scale mapping scenarios. The proposed QP-based controller was successfully implemented, and its efficacy was demonstrated through simulations within ROS as well as real-world experiments conducted on our testbed. These results affirm the potential of our approach as a practical solution for improving monitoring efficiency in the context of large-scale 3D map reconstruction.
Disclosure statement
No potential conflict of interest was reported by the author(s).
References
- Rutkauskas M, Asenov M, Ramamoorthy S, et al. Autonomous multi-species environmental gas sensing using drone-based Fourier-transform infrared spectroscopy. Opt Express. 2019;27(7):9578–9587. doi: 10.1364/OE.27.009578
- Qu C, Singh R, Morel AE, et al. Obstacle-aware and energy-efficient multi-drone coordination and networking for disaster response. In: 2021 17th International Conference on Network and Service Management (CNSM); IEEE; 2021. p. 446–454.
- Qu C, Sorbelli FB, Singh R, et al. Environmentally-aware and energy-efficient multi-drone coordination and networking for disaster response. IEEE Trans Netw Serv Manag. 2023.
- Albani D, Manoni T, Arik A, et al. Field coverage for weed mapping: toward experiments with a UAV swarm. In: Bio-inspired Information and Communication Technologies: 11th EAI International Conference, BICT 2019, Pittsburgh, PA, USA, March 13–14, 2019, Proceedings 11; Springer; 2019. p. 132–146.
- Mammarella M, Donati C, Shimizu T, et al. 3D map reconstruction of an orchard using an angle-aware covering control strategy. IFAC-PapersOnLine. 2022;55(32):271–276. doi: 10.1016/j.ifacol.2022.11.151
- Mammarella M, Comba L, Biglia A, et al. Cooperation of unmanned systems for agricultural applications: a theoretical framework. Biosyst Eng. 2022;223:61–80. doi: 10.1016/j.biosystemseng.2021.11.008
- Seraj E, Gombolay M. Coordinated control of UAVs for human-centered active sensing of wildfires. In: 2020 American Control Conference (ACC); IEEE; 2020. p. 1845–1852.
- Seraj E, Silva A, Gombolay M. Multi-UAV planning for cooperative wildfire coverage and tracking with quality-of-service guarantees. Auton Agent Multi Agent Syst. 2022;36(2):39. doi: 10.1007/s10458-022-09566-6
- Mammarella M, Comba L, Biglia A, et al. Cooperation of unmanned systems for agricultural applications: a case study in a vineyard. Biosyst Eng. 2022;223:81–102. doi: 10.1016/j.biosystemseng.2021.12.010
- Tagarakis AC, Kalaitzidis D, Filippou E, et al. 3D scenery construction of agricultural environments for robotics awareness. In: Information and Communication Technologies for Agriculture-Theme III: Decision. Springer; 2022. p. 125–142.
- Cortes J, Martinez S, Bullo F. Spatially-distributed coverage optimization and control with limited-range interactions. ESAIM: Contr Optim Calculus Variations. 2005;11(4):691–719.
- Schwager M, Julian BJ, Angermann M, et al. Eyes in the sky: decentralized control for the deployment of robotic camera networks. Proc IEEE. 2011;99(9):1541–1561. doi: 10.1109/JPROC.2011.2158377
- Palacios-Gasós JM, Montijano E, Sagüés C, et al. Distributed coverage estimation and control for multirobot persistent tasks. IEEE Trans Robot. 2016;32(6):1444–1460. doi: 10.1109/TRO.2016.2602383
- Sugimoto K, Hatanaka T, Fujita M, et al. Experimental study on persistent coverage control with information decay. In: 2015 54th Annual Conference of the Society of Instrument and Control Engineers of Japan (SICE); IEEE; 2015. p. 164–169.
- Wang YW, Zhao MJ, Yang W, et al. Collision-free trajectory design for 2-d persistent monitoring using second-order agents. IEEE Trans Control Netw Syst. 2019;7(2):545–557. doi: 10.1109/TCNS.6509490
- Dan H, Hatanaka T, Yamauchi J, et al. Persistent object search and surveillance control with safety certificates for drone networks based on control barrier functions. Front Robot AI. 2021;333.
- Shimizu T, Yamashita S, Hatanaka T, et al. Angle-aware coverage control for 3-D map reconstruction with drone networks. IEEE Contr Syst Lett. 2021;6:1831–1836. doi: 10.1109/LCSYS.2021.3135466
- Suenaga M, Shimizu T, Hatanaka T, et al. Experimental study on angle-aware coverage control with application to 3-D visual map reconstruction. In: 2022 IEEE Conference on Control Technology and Applications (CCTA); IEEE; 2022. p. 327–333.
- Daftry S, Hoppe C, Bischof H. Building with drones: accurate 3D facade reconstruction using MAVs. In: 2015 IEEE International Conference on Robotics and Automation (ICRA); IEEE; 2015. p. 3487–3494.
- Gupta SK, Shukla DP. Application of drone for landslide mapping, dimension estimation and its 3D reconstruction. J Indian Soc Remote Sens. 2018;46:903–914. doi: 10.1007/s12524-017-0727-1
- Andersen MS, Dahl J, Vandenberghe L, et al. CVXOPT: a Python package for convex optimization. Available at cvxopt.org. 2013;54.
- Cortes J, Martinez S, Karatas T, et al. Coverage control for mobile sensing networks. IEEE Trans Rob Autom. 2004;20(2):243–255. doi: 10.1109/TRA.2004.824698
- Lloyd S. Least squares quantization in PCM. IEEE Trans Inform Theory. 1982;28(2):129–137. doi: 10.1109/TIT.1982.1056489
- Ames AD, Xu X, Grizzle JW, et al. Control barrier function based quadratic programs for safety critical systems. IEEE Trans Automat Contr. 2016;62(8):3861–3876. doi: 10.1109/TAC.2016.2638961
Appendix 1.
Zeroing control barrier function and QP-based controller
In this Appendix, we present the precise definition of the zeroing control barrier function (ZCBF) and associated QP-based controller.
Let us consider the system (A1) (A1) where represents the system state, represents the control input, and represent the vector fields assumed to be Lipschitz continuous.
Next, we consider a continuously differentiable function and a set . The function b is said to be a ZCBF for the set if there exists a set with such that where are the Lie derivatives of along and , respectively, and α is an extended class-K function. Therefore, as long as b is a ZCBF, there always exists that ensures the constraint at the boundary of the set .
A set is said to be forward invariant if holds for all and for any . It is proved in [Citation24] that forward invariance of the set is rendered by any Lipschitz continuous controller satisfying the constraint . As such a controller, Ames et al. [Citation24] presented the QP-based controller for a given nominal input : The controller achieves the control action closest to the nominal one while ensuring the forward invariance of the set .
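As a worked example of the above, the following sketch filters a nominal input for a single integrator so that the state remains inside a disk, using b(x) = radius² - ||x||² and a linear extended class-K function alpha(s) = gamma·s. All parameter values are illustrative assumptions, not from the paper.

```python
import numpy as np

def safety_filter(x, u_nom, radius=1.0, gamma=1.0):
    """ZCBF-QP example: keep a single integrator x' = u inside the disk
    ||x|| <= radius.  With b(x) = radius^2 - ||x||^2, the QP
        min_u ||u - u_nom||^2   s.t.  -2 x.T u + gamma * b(x) >= 0
    has the closed-form projection below (one linear constraint).
    """
    x = np.asarray(x, dtype=float)
    u_nom = np.asarray(u_nom, dtype=float)
    a = -2.0 * x                              # L_g b(x) for f = 0, g = identity
    c = -gamma * (radius ** 2 - x @ x)        # constraint rewritten as a.T u >= c
    violation = c - a @ u_nom
    if violation <= 0.0 or a @ a == 0.0:      # nominal input is already safe
        return u_nom.copy()
    return u_nom + (violation / (a @ a)) * a  # minimal correction onto the constraint
```

Near the boundary the filter cancels just enough of the outward velocity component so that b decays no faster than its class-K bound, which is exactly the forward-invariance guarantee quoted above.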