Introduction

Use of LiDAR in nature conservation and forestry

Light detection and ranging (LiDAR) remote sensing is a well-established tool in the assessment of biodiversity and nature conservation. From the raw 3D pointclouds, a large variety of information can be derived that helps to explain and manage critical environmental processes such as shrub encroachment in grasslands (Madsen et al. 2020), forest health (Duncanson and Dubayah 2018) or wetland loss (Montgomery et al. 2019). One of the most common applications of LiDAR data is the description of landscape structure, which then serves as a driver of species occurrence (Carrasco et al. 2019; Melin et al. 2016; Froidevaux et al. 2016) or as an indicator of biodiversity (Hilmers et al. 2018; Simonson, Allen, and Coomes 2014).

Forest environments in particular are a very popular target for analyses with LiDAR data (Beland et al. 2019). Structural information can be derived at the level of individual tree groups (Jeronimo et al. 2018) or individual crowns in sparse forest stands (Silva et al. 2016). A more operational and well-established approach, however, is to transfer the structural information of the 3D pointcloud into a regular 2D grid, commonly referred to as LiDAR indices. The vegetation structure is then represented, e.g., as the vertical distribution or the maximum height of the points in each grid cell (Bakx et al. 2019). In order to preserve some of the original 3D information, there are indices which work on voxels (i.e. horizontal slices of the pointcloud) and can represent different strata of the forest canopy (e.g. Alexander et al. 2014). The reliability of LiDAR indices is usually demonstrated by relating them to field measurements of established forest structural attributes like canopy cover and canopy height (J. Lee et al. 2018; Alexander et al. 2014), stand density (C.-C. Lee and Wang 2018) or leaf area (Kamoske et al. 2019).

Despite their relevance in conservation and research, LiDAR data have some major drawbacks, mainly their cost and accessibility. Professional LiDAR sensors and airborne data acquisition are expensive and often distributed commercially. Data provided by governmental institutions are still often not publicly available. Further, the temporal resolution of the data is low, making them unsuitable for monitoring or for applications which require different seasonal conditions. Researchers also have no control over the acquisition time, although the environmental conditions (e.g. leaf-on or leaf-off) have a direct impact on the derived LiDAR indices and therefore on the product quality (Davison, Donoghue, and Galiatsatos 2020). It can further be beneficial to combine leaf-off and leaf-on LiDAR data, e.g. to improve the model quality of tree species classifications (Shi et al. 2018).

Lately, Unoccupied Aerial Systems (UAS; Joyce, Anderson, and Bartolo 2021) have undergone a rise in popularity in environmental research. Especially consumer-grade drones can help to overcome some shortcomings of airborne surveys. Quick and flexible image acquisition over moderately large areas makes UAS promising for the monitoring of agricultural or natural systems (Manfreda et al. 2018; Fallati et al. 2020). Depending on flight conditions, data can be acquired on a near-daily basis. This opens up the possibility of surveying multi-temporal landscape features like plant phenology, which improves classifications of vegetation types (van Iersel et al. 2018).
In conjunction with LiDAR sensors, UAS have found use in all major environmental science domains, e.g. landslide monitoring (Pellicani et al. 2019), savanna bush encroachment (Madsen et al. 2020), forest inventory (Wallace et al. 2012; Wallace, Lucieer, and Watson 2014) or arid and semi-arid land vegetation monitoring (Sankey et al. 2018). Unfortunately, the cost of a LiDAR UAS currently exceeds the low-budget approach many researchers are looking for.
With recent improvements in photogrammetric aerial image processing, an alternative to LiDAR pointclouds has become available. From overlapping images taken with a camera (RGB, multi- or hyperspectral), structural information can be derived by detecting distinct features in the individual images and projecting them into 3D space (Iglhaut et al. 2019). Software packages like Agisoft Metashape or OpenDroneMap make operational photogrammetric workflows possible (Ludwig et al. 2020), which increases the availability of Digital Aerial Photogrammetry (DAP) pointclouds for a wide variety of applications. In particular, estimates of canopy height in forests (Fawcett et al. 2019; Krause et al. 2019; Michez et al. 2020; Ganz, Käber, and Adler 2019) and agriculture (Grüner, Astor, and Wachendorf 2019) compare very well to field measurements and to estimates from LiDAR or Terrestrial Laser Scanning (Malambo et al. 2018). The straightforward data access of DAP is beneficial for monitoring tree growth rates (Guerra-Hernández et al. 2017) and crops (Moeckel et al. 2018). Previous comparisons also revealed the potential of DAP pointclouds as a substitute for LiDAR when estimating common forest attributes (e.g. Ullah et al. 2019; Cao et al. 2019) and, to a lesser extent, for biomass estimation in the tropics (Ota et al. 2015). A systematic comparison between both data types is still missing.

A systematic comparison of phenological stages

One major drawback of DAP pointclouds is that they only contain surface points which are visible in the individual images. In forest environments, it is very unlikely that points below the canopy are surveyed. The selling feature of LiDAR pointclouds, on the other hand, is the ability of the laser scanner to reach below the canopy. This information can be used, e.g., to differentiate between ground and non-ground points or to assess different strata of the forest. By combining DAP pointclouds from different phenological stages (leaf-off and leaf-on), this drawback could be mitigated.
This study demonstrates the use of pointclouds derived from digital aerial photogrammetry for forest structural analysis. Commonly applied forest structural indices are compared between DAP and LiDAR pointclouds for different spatial scales and different phenological stages of a deciduous forest. In addition, we propose the combination of multiple DAP pointclouds as a way to improve their information content and their comparability to LiDAR data.

Methods

Study Area

The study area is a 200 x 200 m section of a mixed deciduous forest near Marburg (Germany). The area consists of oaks and beeches and represents a typical stand in a managed deciduous forest. Terrain elevation ranges from 250 m to 275 m a.s.l. Stem positions of 500 trees were acquired using a differential GPS (Zenith 35 Pro, GeoMax, Widnau, Switzerland) with a positioning accuracy of 0.05 m.

Datasets

The LiDAR data was acquired with a Riegl LMS-Q780 sensor in early spring 2018 under leaf-off conditions (Tab. @ref(tab:tabDatasetOverview)) and provided by the Hessian Agency for Nature Conservation, Environment and Geology (HLNUG). The original pointcloud had an average point density of XX points per square meter and a georeferencing accuracy of 0.3 m horizontally and 0.15 m vertically (Novatel OEM4 GNSS). The LiDAR pointcloud was already classified into ground and non-ground points by the data provider.
The DAP pointclouds were acquired with a 3DR Solo quadrocopter (3D Robotics, Inc., Berkeley CA, USA) and a GoPro Hero 7 camera (GoPro Inc., San Mateo CA, USA) between August and December of 2020. The flights were planned at a uniform altitude of 110 m above ground, resulting in a ground sampling distance of 5.6 cm, a side overlap of 75% and a front overlap of 90%. Using the photogrammetric software Metashape (Agisoft LLC, St. Petersburg, Russia), sparse pointclouds were computed from the individual images of each flight. Each sparse pointcloud was georeferenced with 10 ground control points surveyed with the Real Time Kinematic (RTK) GNSS (Global Navigation Satellite System) device GeoMax Zenith 35 (GeoMax AG, Widnau, Switzerland). A detailed explanation of the applied photogrammetric workflow is given in Ludwig et al. (2020). After georeferencing, dense pointclouds were computed for each flight from non-resampled depth maps with moderate filtering.

| ID | Type  | Sensor         | Acquisition date (Y-M-D) | Description                          |
|----|-------|----------------|--------------------------|--------------------------------------|
| 1  | LiDAR | Riegl LMS-Q780 | 2018-04-06               | leaf off                             |
| 2  | DAP   | GoPro Hero 7   | 2020-09-15               | full canopy                          |
| 3  | DAP   | GoPro Hero 7   | 2020-10-28               | full canopy / beginning of coloring  |
| 4  | DAP   | GoPro Hero 7   | 2020-10-31               | early leaf fall                      |
| 5  | DAP   | GoPro Hero 7   | 2020-11-03               | advanced leaf fall                   |
| 6  | DAP   | GoPro Hero 7   | 2020-11-12               | advanced leaf fall                   |
| 7  | DAP   | GoPro Hero 7   | 2020-12-10               | leaf off                             |

Pointcloud preprocessing and combination

Forest structural analysis from pointclouds requires a canopy height model (CHM), i.e. the point heights normalized by the terrain height from a digital elevation model (DEM). Since it is difficult (and sometimes impossible) to identify ground points in vegetated areas in DAP pointclouds, they are mostly not suitable for deriving the DEM themselves (Ota et al. 2015). Other sources of DEMs are therefore utilized, e.g. the interpolation of surveyed ground points (Grüner, Astor, and Wachendorf 2019). Since terrain elevation usually does not change rapidly over time, available LiDAR data is also a viable source in most cases (Ullah et al. 2019).
In order to increase the comparability between the pointclouds, each one was homogenized to a density of 50 points per square meter. The classified LiDAR ground points were used to compute a DEM with a spatial resolution of 1 m using a k-nearest neighbor algorithm with inverse distance weighting. This DEM was used to normalize the height of each pointcloud, which effectively results in a canopy height model (CHM). To eliminate the effect of ground points in the subsequent index calculation, points below 2 m canopy height were removed.
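These steps can be sketched with the lidR package used for all pointcloud processing in this study (see below); the file names and the interpolation parameters (k, p of knnidw()) in the following sketch are illustrative assumptions, not the exact settings used here:

```r
# Minimal preprocessing sketch with lidR; file names and knnidw()
# parameters are assumptions, not the exact settings of this study.
library(lidR)

las_lidar <- readLAS("lidar_2018_leafoff.laz")   # ground points already classified
las_dap   <- readLAS("dap_2020-09-15.laz")

# homogenize both clouds to 50 points per square meter
las_lidar <- decimate_points(las_lidar, homogenize(50, res = 1))
las_dap   <- decimate_points(las_dap, homogenize(50, res = 1))

# 1 m DEM from the classified LiDAR ground points, k-nearest neighbour IDW
# (rasterize_terrain() replaces grid_terrain() in lidR >= 4.0)
dem <- rasterize_terrain(las_lidar, res = 1, algorithm = knnidw(k = 10, p = 2))

# height normalization (CHM) and removal of points below 2 m canopy height
chm_lidar <- filter_poi(normalize_height(las_lidar, dem), Z >= 2)
chm_dap   <- filter_poi(normalize_height(las_dap, dem), Z >= 2)
```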
DAP pointclouds from the five phenological stages during fall 2020 were each combined with the leaf-off DAP pointcloud from 2020-12-10. These five multitemporal pointclouds were preprocessed in the same way as the individual pointclouds (Fig. @ref(fig:figWorkflow)). All pointcloud-based methods and computations were carried out in R using the lidR package (Roussel et al. 2020).
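The combination of a leaf-on with the leaf-off DAP cloud can be sketched with lidR's rbind() method for LAS objects, followed by the same preprocessing as above; the file names are again illustrative:

```r
# Sketch of the multitemporal combination (leaf-on + leaf-off DAP cloud)
las_leafon  <- readLAS("dap_2020-09-15.laz")
las_leafoff <- readLAS("dap_2020-12-10.laz")

las_multi <- rbind(las_leafon, las_leafoff)                 # merge the two LAS objects
las_multi <- decimate_points(las_multi, homogenize(50, res = 1))
chm_multi <- filter_poi(normalize_height(las_multi, dem), Z >= 2)
```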

Calculation of LiDAR indices

Previous studies have shown strong correlations between the most commonly used LiDAR indices (Shi et al. 2018). We therefore selected four commonly used indices from the review of Bakx et al. (2019).

Canopy height

The most common application of pointclouds is the estimation of canopy height. The simplest index is the maximum height of the points in each raster cell (\(Z_{max}\)). A more robust approach, which is less prone to extreme values, is the mean height of all points at or above the 95th height percentile (\(Z_{mean95}\), Eq. 1). Both indices are included in more than 10 studies listed in Bakx et al. (2019).

\[ Z_{mean95} = \frac{\Sigma(Z_{95})}{N_{95}} \;\;\; [1]\]
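As a sketch, both indices can be rasterized per grid cell with lidR's pixel_metrics() (grid_metrics() in lidR < 4.0); the helper function and the interpretation of \(Z_{95}\) as all points at or above the 95th height percentile are our assumptions:

```r
# Canopy height indices per 1 m cell; height_metrics() is a custom helper.
height_metrics <- function(z) {
  z95 <- quantile(z, 0.95)
  list(
    z_max    = max(z),             # Z_max
    z_mean95 = mean(z[z >= z95])   # Z_mean95 (Eq. 1)
  )
}

idx_dap_1m   <- pixel_metrics(chm_dap, ~height_metrics(Z), res = 1)
idx_lidar_1m <- pixel_metrics(chm_lidar, ~height_metrics(Z), res = 1)
```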

Canopy vertical structure

It is common practice to regard only the first returns of a LiDAR pointcloud and treat them as the so-called canopy surface model (CSM). DAP pointclouds do not have return numbers; however, it is a reasonable assumption that every DAP point represents the canopy surface, since only points visible from above the canopy can be reconstructed. Therefore, the indices \(Z_{mean\_csm}\) (mean height) and \(Z_{sd\_csm}\) (standard deviation of heights) were applied to the first returns of the LiDAR data but to all points of the DAP pointclouds. To further assess the vertical strata of the canopy, we calculated the point height percentiles in steps of 10% (10th to 100th percentile) in each raster cell.
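A sketch of the CSM indices and the percentile heights, using lidR's filter_first() for the LiDAR first returns and all points of the DAP cloud; the helper function name is ours:

```r
# CSM indices and height percentiles (10% steps) per cell;
# csm_metrics() is a custom helper, not part of lidR.
csm_metrics <- function(z) {
  p <- quantile(z, probs = seq(0.1, 1, 0.1))
  c(list(z_mean_csm = mean(z), z_sd_csm = sd(z)),
    as.list(setNames(p, paste0("z_q", seq(10, 100, 10)))))
}

csm_lidar <- pixel_metrics(filter_first(chm_lidar), ~csm_metrics(Z), res = 1)
csm_dap   <- pixel_metrics(chm_dap, ~csm_metrics(Z), res = 1)
```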

Canopy heterogeneity

To assess the horizontal heterogeneity of the canopy, we used the aforementioned indices with a 1m resolution and calculated their standard deviation in a 10 x 10 m grid, resulting in a grid with 10m resolution.
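Assuming the 1 m index rasters are SpatRaster objects (as returned by lidR >= 4.0), this aggregation could be sketched with terra::aggregate():

```r
# Horizontal heterogeneity: standard deviation of the 1 m indices
# within 10 x 10 m blocks (aggregation factor 10).
library(terra)
idx_sd_10m <- aggregate(idx_dap_1m, fact = 10, fun = sd, na.rm = TRUE)
```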

Normalizing indices values

For better comparability between the different indices, the values were normalized to a range between 0 and 1 by subtracting the minimum value and dividing by the range of the index (Eq. 3).

\[ Ind_{norm} = \frac{Ind - min(Ind)}{max(Ind) - min(Ind)} \;\;\;[3]\]
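As a sketch, Eq. 3 translates into a small helper applied to each index layer (again assuming SpatRaster objects):

```r
# Min-max normalization of an index layer to [0, 1] (Eq. 3)
normalize01 <- function(r) {
  rng <- range(values(r), na.rm = TRUE)
  (r - rng[1]) / (rng[2] - rng[1])
}

z_max_norm <- normalize01(idx_dap_1m[["z_max"]])
```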

Results

Direct comparison of LiDAR and DAP

The vertical point distribution of each pointcloud reveals a high point density between 20 and 35 m above ground, where the forest canopy is located (Fig. @ref(fig:figVertpointsmono)). As expected, the LiDAR point density shows a spike at ground level, where over 20% of the points are located. However, this also leads to a lower relative point density in the canopy strata. The four DAP pointclouds in which at least parts of the canopy are present (red in Fig. @ref(fig:figVertpointsmono)) show a very similar pattern in their vertical point distribution, with a peak density at around 33 m and a smooth drop-off in the lower parts of the canopy between 25 m and 15 m above ground. The leaf-off DAP pointcloud (2020-12-10) has its peak density below 30 m and contains fewer points between 20 m and 5 m.

Comparison of the vertical point distribution of LiDAR and DAP pointclouds of different acquisition dates.

Comparison of Indices at different spatial resolutions

Comparing a DAP pointcloud from a flight under leaf-on conditions with the LiDAR pointcloud acquired under leaf-off conditions revealed strong correlations for all canopy height indices. At a 1 m spatial resolution, the lowest correlation coefficient of 0.79, for the mean height of the CSM, still indicates a strong agreement between the two datasets. In general, the heights of the DAP pointclouds are slightly overestimated. Decreasing the spatial resolution of the index calculation removes most of the outliers, leading to nearly identical index values at a 10 m resolution (Fig. @ref(fig:figBakx1resolution)). The vertical distribution of canopy surface points (z_sd_csm) at high spatial resolutions is not comparable between the LiDAR and DAP pointclouds; however, a correlation coefficient of 0.56 at a 10 m resolution indicates a weak agreement. Overall, the standard deviation of the point heights of the DAP pointcloud is lower than that of the LiDAR pointcloud.

Correlation between indices based on LiDAR and DAP pointclouds. The DAP pointcloud was acquired under leaf-on canopy conditions. All indices were normalized for better comparability and scatterplot points were aggregated in hexagonal bins with a logarithmic color scale.

Comparison of indices for different DAP acquisition dates

Across all flight dates, there is a strong agreement between the canopy height related indices from LiDAR and DAP pointclouds. However, at later phenological stages of the deciduous forest, the correlation coefficients decrease (Fig. @ref(fig:figBakx1dates01)). The DAP pointclouds acquired later show more index values near 0, indicating that more points in the understory are present. Again, the standard deviation of the CSM height is not comparable between LiDAR and DAP. At a 10 m resolution for the index calculation, the correlation between LiDAR and DAP pointclouds is independent of the DAP acquisition date (Fig. @ref(fig:figBakx1dates10)).

Comparison of indices at a 1 m spatial resolution between the LiDAR pointcloud and DAP pointclouds acquired at different phenological stages. All indices were normalized for better comparability and scatterplot points were aggregated in hexagonal bins with a logarithmic color scale.

Comparison of indices at a 10 m spatial resolution between the LiDAR pointcloud and DAP pointclouds acquired at different phenological stages. All indices were normalized for better comparability and scatterplot points were aggregated in hexagonal bins with a logarithmic color scale.

Comparison of horizontal heterogeneity for different DAP acquisition dates

The indices at 1 m resolution were the basis for the calculation of their standard deviation within 10 x 10 m pixels. The horizontal canopy heterogeneity compares well, especially between the leaf-on DAP pointclouds and LiDAR (Fig. @ref(fig:figHorizontalsd)). The leaf-off DAP pointcloud (2020-12-10) again seems to underestimate point heights and consequently their spatial heterogeneity.

Standard deviation of the indices at a 2 m resolution aggregated to a 10 m resolution

Vertical structure

The pixel-wise calculation of percentile heights gives insights into the vertical structure of the canopy and into which strata are comparable between LiDAR and DAP pointclouds. As expected, and already indicated by the height-based indices (Fig. @ref(fig:figBakx1resolution)), the upper strata compare very well between the two data sources. At the higher spatial resolutions, the correlation coefficients between the pointclouds are consistently above 0.7 down to the 40th-percentile height (Fig. @ref(fig:figZvoxelLeafon)). Coarsening the spatial resolution to 10 m, even lower parts of the canopy (down to the 20th-percentile height) are comparable.
Only the 10th-percentile height of the DAP pointcloud is overestimated compared to LiDAR, and its correlation coefficient does not exceed 0.6, since fewer points are present below the canopy.

Comparison of the percentile heights of points at different resolutions between LiDAR and DAP pointclouds. The DAP pointcloud was acquired under a fully developed canopy.

Discussion

The widespread popularity of UAS in remote sensing and ecology has enabled the acquisition of novel datasets for environmental monitoring. With temporally flexible and abundant surveys, not only have the potential applications increased, but new questions have also been raised about when and how often flight campaigns need to be carried out for a particular research interest or application. Novel data formats and acquisition methods also need to be tested in terms of their contribution to and integration into established methods and workflows. We adapted the concept of structural indices calculated from LiDAR pointclouds for forest structural analysis and applied it to DAP pointclouds. In addition, we explored the effects of forest phenology on these indices and their consistency over time.

Major differences between LiDAR and DAP pointclouds

The most obvious difference between LiDAR and DAP pointclouds in forests is the lack of ground points in the latter, even for a DAP flight under leaf-off conditions. While most of the points in the LiDAR pointcloud are located at ground level, the DAP pointclouds contain far more points in the canopy region (Fig. @ref(fig:figVertpointsmono)). For forest structural applications, this might not be problematic, since many studies which utilize LiDAR data exclude points below a certain height to neglect the effect of ground points (e.g. Carrasco et al. 2019; Froidevaux et al. 2016; J. Lee et al. 2018) or only use CHMs and their derivatives. DAP pointclouds might therefore be more suitable for canopy analysis than LiDAR, since more points are available in the region of interest. The major drawback of the lack of lower points in DAP pointclouds is the computation of a DEM and subsequently the normalization of the pointcloud to a CHM: the terrain elevation has to be derived from a different source (Ota et al. 2015; Ullah et al. 2019; Grüner, Astor, and Wachendorf 2019).

Influence of spatial resolution

Depending on the application, different spatial resolutions are useful for the calculation of forest structural indices. CHMs are usually derived at resolutions of 1 m or finer (e.g. Borgogno Mondino et al. 2020), while plot-based studies aggregate the indices at the level of the plot size (Ruiz et al. 2014; Carrasco et al. 2019; Froidevaux et al. 2016; J. Lee et al. 2018). For applications at the forest stand level or for the integration of LiDAR into satellite data (e.g. Sentinel-2), coarser resolutions are sufficient (e.g. Li et al. 2020; Prins and Van Niekerk 2020). Across all tested spatial resolutions between 1 and 10 m, strong correlations between the height-based forest structural indices of LiDAR and DAP pointclouds were found. Especially at 10 m resolution, LiDAR and DAP canopy heights are interchangeable. Not comparable, on the other hand, is the vertical variability of the canopy surface points (z_sd_csm), which most likely stems from our assumption that every point in the DAP pointcloud is located at the canopy surface.

Influence of forest phenology

The strongest correlations between structural indices were found between the leaf-off LiDAR data and the DAP pointcloud with a fully developed canopy. However, the comparability decreased only slightly at later phenological stages. Especially at coarser spatial resolutions, the effects of phenology are marginal. These results suggest that forest structural analysis with UAS can be conducted with high confidence at any phenological stage of the year. Researchers can therefore focus on optimal flight conditions (e.g. consistent illumination and no wind), which heavily affect the quality of the DAP products (Dandois, Olano, and Ellis 2015). Literature on DAP in deciduous forests under leaf-off conditions is sparse. UAS-based photogrammetry in forests is difficult in the first place due to repeating patterns in the canopy (Iglhaut et al. 2019); this might be even more the case in winter, when no canopy structure is present. Future research should explore the effects of forest phenology on the photogrammetric quality.

Vertical strata

The bias of LiDAR towards ground points is also apparent when comparing the percentile heights in each pixel.

The lower parts of the crown can be identified independently of the exact flight date. This means that studies and applications can be conducted with much more flexibility and with more attention to optimal flight conditions (illumination, wind, etc.).

The weak correlations in the lower voxels between the LiDAR and UAS pointclouds might arise because leaf-off UAS flights are able to capture more branches and stems, and therefore more points in the lower parts of the pointcloud (Fig. @ref(fig:figVertpointsmono)). UAS might therefore be better suited than leaf-off LiDAR data to capture understory vegetation (or at least the potential space for understory vegetation).

A further possibility is the segmentation of individual trees, which could then be related to stand variables (Sackov 2019) or even to tree health (Belmonte 2018).

A real strength lies in the combination of multiple data types, e.g. combining UAS photogrammetry and LiDAR to monitor river channels (Flener et al. 2013) or mixing pointclouds and images to classify individual trees (Xu et al. 2020).

Outlook

The Essential Biodiversity Variables (EBV) framework incorporates 3D information on vegetation height, cover and structural complexity (Valbuena et al. 2020). Deriving such variables from heterogeneous data sources requires the comparability of LiDAR and photogrammetrically derived pointclouds.

The quality and viability of UAS pointclouds therefore have to be assessed in terms of their comparability to LiDAR pointclouds, since LiDAR-based structural analysis is the standard in many studies.

The main challenge for the further use of LiDAR data in forest environments is the detection of individual trees. This enables the estimation of tree-related parameters such as diameter at breast height, timber volume or crown metrics (van Leeuwen and Nieuwenhuis 2010). Forest structure can then be described as the sum of the structures of individual trees (e.g. their height and biomass; Ferraz et al. 2016) and the species composition. This could give new insights into ecosystem functioning, since many processes and species distributions depend on functions provided by trees or their associated microhabitats. Further, the monitoring of individual tree health and drought response could be applied in forestry.

References

Alexander, Cici, Peder Klith Bøcher, Lars Arge, and Jens-Christian Svenning. 2014. “Regional-Scale Mapping of Tree Cover, Height and Main Phenological Tree Types Using Airborne Laser Scanning Data.” Remote Sensing of Environment 147 (May): 156–72. https://doi.org/10.1016/j.rse.2014.02.013.
Bakx, Tristan R. M., Zsófia Koma, Arie C. Seijmonsbergen, and W. Daniel Kissling. 2019. “Use and Categorization of Light Detection and Ranging Vegetation Metrics in Avian Diversity and Species Distribution Research.” Edited by Damaris Zurell. Diversity and Distributions 25 (7): 1045–59. https://doi.org/10.1111/ddi.12915.
Beland, Martin, Geoffrey Parker, Ben Sparrow, David Harding, Laura Chasmer, Stuart Phinn, Alexander Antonarakis, and Alan Strahler. 2019. “On Promoting the Use of Lidar Systems in Forest Ecosystem Research.” Forest Ecology and Management 450 (October): 117484. https://doi.org/10.1016/j.foreco.2019.117484.
Borgogno Mondino, Enrico, Vanina Fissore, Michael J. Falkowski, and Brian Palik. 2020. “How Far Can We Trust Forestry Estimates from Low-Density LiDAR Acquisitions? The Cutfoot Sioux Experimental Forest (MN, USA) Case Study.” International Journal of Remote Sensing 41 (12): 4551–69. https://doi.org/10.1080/01431161.2020.1723173.
Cao, Lin, Hao Liu, Xiaoyao Fu, Zhengnan Zhang, Xin Shen, and Honghua Ruan. 2019. “Comparison of UAV LiDAR and Digital Aerial Photogrammetry Point Clouds for Estimating Forest Structural Attributes in Subtropical Planted Forests.” Forests 10 (2): 145. https://doi.org/10.3390/f10020145.
Carrasco, Luis, Xingli Giam, Monica Papeş, and Kimberly Sheldon. 2019. “Metrics of Lidar-Derived 3d Vegetation Structure Reveal Contrasting Effects of Horizontal and Vertical Forest Heterogeneity on Bird Species Richness.” Remote Sensing 11 (7): 743. https://doi.org/10.3390/rs11070743.
Dandois, Jonathan, Marc Olano, and Erle Ellis. 2015. “Optimal Altitude, Overlap, and Weather Conditions for Computer Vision UAV Estimates of Forest Structure.” Remote Sensing 7 (10): 13895–920. https://doi.org/10.3390/rs71013895.
Davison, Sophie, Daniel N. M. Donoghue, and Nikolaos Galiatsatos. 2020. “The Effect of Leaf-on and Leaf-Off Forest Canopy Conditions on LiDAR Derived Estimations of Forest Structural Diversity.” International Journal of Applied Earth Observation and Geoinformation 92 (October): 102160. https://doi.org/10.1016/j.jag.2020.102160.
Duncanson, Laura, and Ralph Dubayah. 2018. “Monitoring Individual Tree-Based Change with Airborne Lidar.” Ecology and Evolution 8 (10): 5079–89. https://doi.org/10.1002/ece3.4075.
Fallati, Luca, Luca Saponari, Alessandra Savini, Fabio Marchese, Cesare Corselli, and Paolo Galli. 2020. “Multi-Temporal UAV Data and Object-Based Image Analysis (OBIA) for Estimation of Substrate Changes in a Post-Bleaching Scenario on a Maldivian Reef.” Remote Sensing 12 (13): 2093. https://doi.org/10.3390/rs12132093.
Fawcett, Dominic, Benjamin Azlan, Timothy C. Hill, Lip Khoon Kho, Jon Bennie, and Karen Anderson. 2019. "Unmanned Aerial Vehicle (UAV) Derived Structure-from-Motion Photogrammetry Point Clouds for Oil Palm (Elaeis Guineensis) Canopy Segmentation and Height Estimation." International Journal of Remote Sensing 40 (19): 7538–60. https://doi.org/10.1080/01431161.2019.1591651.
Ferraz, António, Sassan Saatchi, Clément Mallet, and Victoria Meyer. 2016. “Lidar Detection of Individual Tree Size in Tropical Forests.” Remote Sensing of Environment 183 (September): 318–33. https://doi.org/10.1016/j.rse.2016.05.028.
Flener, Claude, Matti Vaaja, Anttoni Jaakkola, Anssi Krooks, Harri Kaartinen, Antero Kukko, Elina Kasvi, Hannu Hyyppä, Juha Hyyppä, and Petteri Alho. 2013. “Seamless Mapping of River Channels at High Resolution Using Mobile LiDAR and UAV-Photography.” Remote Sensing 5 (12): 6382–6407. https://doi.org/10.3390/rs5126382.
Froidevaux, Jérémy S. P., Florian Zellweger, Kurt Bollmann, Gareth Jones, and Martin K. Obrist. 2016. “From Field Surveys to LiDAR: Shining a Light on How Bats Respond to Forest Structure.” Remote Sensing of Environment 175 (March): 242–50. https://doi.org/10.1016/j.rse.2015.12.038.
Ganz, Selina, Yannek Käber, and Petra Adler. 2019. “Measuring Tree Height with Remote Sensing of Photogrammetric and LiDAR Data with Different Field Measurements.” Forests 10 (8): 694. https://doi.org/10.3390/f10080694.
Grüner, Esther, Thomas Astor, and Michael Wachendorf. 2019. “Biomass Prediction of Heterogeneous Temperate Grasslands Using an SfM Approach Based on UAV Imaging.” Agronomy 9 (2): 54. https://doi.org/10.3390/agronomy9020054.
Guerra-Hernández, Juan, Eduardo González-Ferreiro, Vicente Monleón, Sonia Faias, Margarida Tomé, and Ramón Díaz-Varela. 2017. “Use of Multi-Temporal UAV-Derived Imagery for Estimating Individual Tree Growth in Pinus Pinea Stands.” Forests 8 (8): 300. https://doi.org/10.3390/f8080300.
Hilmers, Torben, Nicolas Friess, Claus Bässler, Marco Heurich, Roland Brandl, Hans Pretzsch, Rupert Seidl, and Jörg Müller. 2018. “Biodiversity Along Temperate Forest Succession.” Edited by Nathalie Butt. Journal of Applied Ecology 55 (6): 2756–66. https://doi.org/10.1111/1365-2664.13238.
Iglhaut, Jakob, Carlos Cabo, Stefano Puliti, Livia Piermattei, James O’Connor, and Jacqueline Rosette. 2019. “Structure from Motion Photogrammetry in Forestry: A Review.” Current Forestry Reports 5 (3): 155–68. https://doi.org/10.1007/s40725-019-00094-3.
Jeronimo, Sean M A, Van R Kane, Derek J Churchill, Robert J McGaughey, and Jerry F Franklin. 2018. “Applying LiDAR Individual Tree Detection to Management of Structurally Diverse Forest Landscapes.” Journal of Forestry 116 (4): 336–46. https://doi.org/10.1093/jofore/fvy023.
Joyce, Karen E., Karen Anderson, and Renee E. Bartolo. 2021. "Of Course We Fly Unmanned – We're Women!" Drones 5 (1): 21. https://doi.org/10.3390/drones5010021.
Kamoske, Aaron G., Kyla M. Dahlin, Scott C. Stark, and Shawn P. Serbin. 2019. “Leaf Area Density from Airborne LiDAR: Comparing Sensors and Resolutions in a Temperate Broadleaf Forest Ecosystem.” Forest Ecology and Management 433 (February): 364–75. https://doi.org/10.1016/j.foreco.2018.11.017.
Krause, Stuart, Tanja G. M. Sanders, Jan-Peter Mund, and Klaus Greve. 2019. "UAV-Based Photogrammetric Tree Height Measurement for Intensive Forest Monitoring." Remote Sensing 11 (7): 758. https://doi.org/10.3390/rs11070758.
Lee, Chung-Cheng, and Chi-Kuei Wang. 2018. “Estimating Stand Density in a Tropical Broadleaf Forest Using Airborne LiDAR Data.” Forests 9 (8): 475. https://doi.org/10.3390/f9080475.
Lee, Junghee, Jungho Im, Kyungmin Kim, and Lindi Quackenbush. 2018. “Machine Learning Approaches for Estimating Forest Stand Height Using Plot-Based Observations and Airborne LiDAR Data.” Forests 9 (5): 268. https://doi.org/10.3390/f9050268.
Li, Wang, Zheng Niu, Rong Shang, Yuchu Qin, Li Wang, and Hanyue Chen. 2020. “High-Resolution Mapping of Forest Canopy Height Using Machine Learning by Coupling ICESat-2 LiDAR with Sentinel-1, Sentinel-2 and Landsat-8 Data.” International Journal of Applied Earth Observation and Geoinformation 92 (October): 102163. https://doi.org/10.1016/j.jag.2020.102163.
Ludwig, Marvin, Christian M. Runge, Nicolas Friess, Tiziana L. Koch, Sebastian Richter, Simon Seyfried, Luise Wraase, et al. 2020. “Quality Assessment of Photogrammetric Methods for Reproducible UAS Orthomosaics.” Remote Sensing 12 (22): 3831. https://doi.org/10.3390/rs12223831.
Madsen, Bjarke, Urs A. Treier, András Zlinszky, Arko Lucieer, and Signe Normand. 2020. “Detecting Shrub Encroachment in Seminatural Grasslands Using UAS LiDAR.” Ecology and Evolution 10 (11): 4876–4902. https://doi.org/10.1002/ece3.6240.
Malambo, L., S. C. Popescu, S. C. Murray, E. Putman, N. A. Pugh, D. W. Horne, G. Richardson, et al. 2018. “Multitemporal Field-Based Plant Height Estimation Using 3d Point Clouds Generated from Small Unmanned Aerial Systems High-Resolution Imagery.” International Journal of Applied Earth Observation and Geoinformation 64 (February): 31–42. https://doi.org/10.1016/j.jag.2017.08.014.
Manfreda, Salvatore, Matthew F McCabe, Pauline E Miller, Richard Lucas, Victor Pajuelo Madrigal, Giorgos Mallinis, Eyal Ben Dor, et al. 2018. “On the Use of Unmanned Aerial Systems for Environmental Monitoring,” 28.
Melin, M., J. Matala, L. Mehtätalo, J. Pusenius, and P. Packalen. 2016. “Ecological Dimensions of Airborne Laser Scanning Analyzing the Role of Forest Structure in Moose Habitat Use Within a Year.” Remote Sensing of Environment 173 (February): 238–47. https://doi.org/10.1016/j.rse.2015.07.025.
Michez, Adrien, Leo Huylenbroeck, Corentin Bolyn, Nicolas Latte, Sébastien Bauwens, and Philippe Lejeune. 2020. “Can Regional Aerial Images from Orthophoto Surveys Produce High Quality Photogrammetric Canopy Height Model? A Single Tree Approach in Western Europe.” International Journal of Applied Earth Observation and Geoinformation 92 (October): 102190. https://doi.org/10.1016/j.jag.2020.102190.
Moeckel, Thomas, Supriya Dayananda, Rama Nidamanuri, Sunil Nautiyal, Nagaraju Hanumaiah, Andreas Buerkert, and Michael Wachendorf. 2018. “Estimation of Vegetable Crop Parameter by Multi-Temporal UAV-Borne Images.” Remote Sensing 10 (5): 805. https://doi.org/10.3390/rs10050805.
Montgomery, Joshua, Brian Brisco, Laura Chasmer, Kevin Devito, Danielle Cobbaert, and Chris Hopkinson. 2019. "SAR and Lidar Temporal Data Fusion Approaches to Boreal Wetland Ecosystem Monitoring." Remote Sensing 11 (2): 161. https://doi.org/10.3390/rs11020161.
Ota, Tetsuji, Miyuki Ogawa, Katsuto Shimizu, Tsuyoshi Kajisa, Nobuya Mizoue, Shigejiro Yoshida, Gen Takao, et al. 2015. “Aboveground Biomass Estimation Using Structure from Motion Approach with Aerial Photographs in a Seasonal Tropical Forest.” Forests 6 (12): 3882–98. https://doi.org/10.3390/f6113882.
Pellicani, Roberta, Ilenia Argentiero, Paola Manzari, Giuseppe Spilotro, Cosimo Marzo, Ruggero Ermini, and Ciro Apollonio. 2019. "UAV and Airborne LiDAR Data for Interpreting Kinematic Evolution of Landslide Movements: The Case Study of the Montescaglioso Landslide (Southern Italy)." Geosciences 9 (6): 248. https://doi.org/10.3390/geosciences9060248.
Prins, Adriaan Jacobus, and Adriaan Van Niekerk. 2020. “Crop Type Mapping Using LiDAR, Sentinel-2 and Aerial Imagery with Machine Learning Algorithms.” Geo-Spatial Information Science, July, 1–13. https://doi.org/10.1080/10095020.2020.1782776.
Roussel, Jean-Romain, David Auty, Nicholas C. Coops, Piotr Tompalski, Tristan R. H. Goodbody, Andrew Sánchez Meador, Jean-François Bourdon, Florian de Boissieu, and Alexis Achim. 2020. "lidR: An R Package for Analysis of Airborne Laser Scanning (ALS) Data." Remote Sensing of Environment 251 (December): 112061. https://doi.org/10.1016/j.rse.2020.112061.
Ruiz, Luis, Txomin Hermosilla, Francisco Mauro, and Miguel Godino. 2014. “Analysis of the Influence of Plot Size and LiDAR Density on Forest Structure Attribute Estimates.” Forests 5 (5): 936–51. https://doi.org/10.3390/f5050936.
Sankey, Temuulen T., Jason McVay, Tyson L. Swetnam, Mitchel P. McClaran, Philip Heilman, and Mary Nichols. 2018. "UAV Hyperspectral and Lidar Data and Their Fusion for Arid and Semi-Arid Land Vegetation Monitoring." Edited by Nathalie Pettorelli and Ned Horning. Remote Sensing in Ecology and Conservation 4 (1): 20–33. https://doi.org/10.1002/rse2.44.
Shi, Yifang, Tiejun Wang, Andrew K. Skidmore, and Marco Heurich. 2018. “Important LiDAR Metrics for Discriminating Forest Tree Species in Central Europe.” ISPRS Journal of Photogrammetry and Remote Sensing 137 (March): 163–74. https://doi.org/10.1016/j.isprsjprs.2018.02.002.
Silva, Carlos A., Andrew T. Hudak, Lee A. Vierling, E. Louise Loudermilk, Joseph J. O'Brien, J. Kevin Hiers, Steve B. Jack, et al. 2016. "Imputation of Individual Longleaf Pine (Pinus Palustris Mill.) Tree Attributes from Field and LiDAR Data." Canadian Journal of Remote Sensing 42 (5): 554–73. https://doi.org/10.1080/07038992.2016.1196582.
Simonson, William D., Harriet D. Allen, and David A. Coomes. 2014. “Applications of Airborne Lidar for the Assessment of Animal Species Diversity.” Edited by Andrew Tatem. Methods in Ecology and Evolution 5 (8): 719–29. https://doi.org/10.1111/2041-210X.12219.
Ullah, Sami, Matthias Dees, Pawan Datta, Petra Adler, Mathias Schardt, and Barbara Koch. 2019. “Potential of Modern Photogrammetry Versus Airborne Laser Scanning for Estimating Forest Variables in a Mountain Environment.” Remote Sensing 11 (6): 661. https://doi.org/10.3390/rs11060661.
Valbuena, R., B. O’Connor, F. Zellweger, W. Simonson, P. Vihervaara, M. Maltamo, C. A. Silva, et al. 2020. “Standardizing Ecosystem Morphological Traits from 3d Information Sources.” Trends in Ecology & Evolution 35 (8): 656–67. https://doi.org/10.1016/j.tree.2020.03.006.
van Iersel, Wimala, Menno Straatsma, Hans Middelkoop, and Elisabeth Addink. 2018. “Multitemporal Classification of River Floodplain Vegetation Using Time Series of UAV Images.” Remote Sensing 10 (7): 1144. https://doi.org/10.3390/rs10071144.
van Leeuwen, Martin, and Maarten Nieuwenhuis. 2010. “Retrieval of Forest Structural Parameters Using LiDAR Remote Sensing.” European Journal of Forest Research 129 (4): 749–70. https://doi.org/10.1007/s10342-010-0381-4.
Wallace, Luke, Arko Lucieer, and Christopher S. Watson. 2014. “Evaluating Tree Detection and Segmentation Routines on Very High Resolution UAV LiDAR Data.” IEEE Transactions on Geoscience and Remote Sensing 52 (12): 7619–28. https://doi.org/10.1109/TGRS.2014.2315649.
Wallace, Luke, Arko Lucieer, Christopher Watson, and Darren Turner. 2012. “Development of a UAV-LiDAR System with Application to Forest Inventory.” Remote Sensing 4 (6): 1519–43. https://doi.org/10.3390/rs4061519.
Xu, Zhong, Xin Shen, Lin Cao, Nicholas C. Coops, Tristan R. H. Goodbody, Tai Zhong, Weidong Zhao, et al. 2020. “Tree Species Classification Using UAS-Based Digital Aerial Photogrammetry Point Clouds and Multispectral Imageries in Subtropical Natural Forests.” International Journal of Applied Earth Observation and Geoinformation 92 (October): 102173. https://doi.org/10.1016/j.jag.2020.102173.