Erin Farmer and 11 more

Recent progress in proximal remote sensing has elevated both the spatial and temporal resolution of data acquisition, expanding the accessibility of these technologies for digital agriculture applications. These advanced sensors enable the collection of extensive and novel datasets that are instrumental in accurately characterizing phenotypes and parameterizing crop growth models. Despite the distinctive structural, spatial, and spectral information embedded in these data streams, they have predominantly been utilized in isolation. This research therefore aims to integrate these disparate data sources to improve estimation of agronomically important crop traits, such as yield. Deep learning methods such as autoencoders will be used to extract latent phenotypes, which in turn will be used to characterize manually measured traits. We focus on multispectral images (MSIs) collected by unoccupied aerial vehicles and lidar scans collected by unoccupied ground vehicles. MSIs capture canopy-level spectral information, including the red, green, blue, red-edge, and near-infrared bands. Lidar scans are converted to point clouds to reconstruct the three-dimensional sub-canopy architecture of maize plants. Data were collected on maize hybrids as part of the Genomes to Fields project from 2018 to 2022 in Aurora, NY. Autoencoder training on MSIs shows that latent phenotypes are effective image representations, containing relevant and sufficient information to generate image reconstructions. The latent codes are also predictive of the image date and of normalized difference vegetation index (NDVI) values. Latent phenotypes were extracted from the lidar point clouds as well, and the prediction accuracies of models using these measurements separately and jointly will be compared.
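The abstract notes that the latent codes are predictive of NDVI. NDVI itself is the standard normalized band ratio of near-infrared and red reflectance; a minimal sketch of that computation (the reflectance values below are illustrative, not field data):

```python
import numpy as np

def ndvi(nir, red, eps=1e-9):
    """Normalized difference vegetation index: (NIR - Red) / (NIR + Red).

    eps guards against division by zero on dark pixels."""
    nir = np.asarray(nir, dtype=float)
    red = np.asarray(red, dtype=float)
    return (nir - red) / (nir + red + eps)

# Per-pixel toy values: healthy vegetation reflects strongly in NIR,
# so its NDVI is high; bare soil reflects NIR and red similarly.
print(ndvi([0.55, 0.30], [0.08, 0.25]))
```

In practice the NIR and red inputs would be the corresponding bands of each multispectral image, applied pixel-wise.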


Recent advancements in proximal remote sensing have increased the spatial and temporal resolution of data collection, as well as the availability of these technologies for applications in precision agriculture. These sensors have enabled the collection of large quantities of new data, which have been used successfully to determine phenotypes and parameterize crop growth models. So far, however, these data streams have mostly been used separately, even though they contain unique structural, spatial, and spectral information. This research therefore aims to integrate these disparate data sources to improve estimation of agronomically important crop traits. In this study, we examine two high-throughput and relatively inexpensive remote platforms: unoccupied ground vehicles (UGVs) and unoccupied aerial vehicles (UAVs). Data were collected on maize hybrids from the Genomes to Fields initiative over five years, from 2018 to 2022, in Aurora, NY. We used ground rovers to collect lidar scans, which were converted to point clouds to reconstruct the three-dimensional sub-canopy architecture of maize plants. Multispectral sensors covering the red, green, blue, red-edge, and near-infrared (NIR) bands were deployed on a UAV platform to characterize maize canopies. Machine learning methods, including autoencoders, will be used to extract latent phenotypes from the lidar point clouds and multispectral images. Ultimately, these latent phenotypes will be used to predict manually measured traits, such as yield, in order to compare the prediction accuracies of models using these measurements separately and jointly.
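The final sentence describes comparing models that use the two sensors' measurements separately versus jointly. One simple way to sketch that comparison, using synthetic stand-ins for the autoencoder latent codes (all names, dimensions, and the choice of closed-form ridge regression here are illustrative assumptions, not the study's actual pipeline):

```python
import numpy as np

rng = np.random.default_rng(0)
n, d_msi, d_lidar = 200, 16, 16

# Hypothetical latent phenotypes: one block per sensor modality.
z_msi = rng.normal(size=(n, d_msi))      # stand-in for UAV multispectral codes
z_lidar = rng.normal(size=(n, d_lidar))  # stand-in for UGV lidar codes

# Synthetic "yield" that depends on both modalities plus noise.
y = (z_msi @ rng.normal(size=d_msi)
     + z_lidar @ rng.normal(size=d_lidar)
     + rng.normal(scale=0.1, size=n))

def ridge_predict(X, y, alpha=1.0):
    """Closed-form ridge regression; returns in-sample predictions."""
    w = np.linalg.solve(X.T @ X + alpha * np.eye(X.shape[1]), X.T @ y)
    return X @ w

def r2(y, yhat):
    """Coefficient of determination."""
    return 1.0 - np.sum((y - yhat) ** 2) / np.sum((y - y.mean()) ** 2)

r2_msi = r2(y, ridge_predict(z_msi, y))                        # MSI only
r2_joint = r2(y, ridge_predict(np.hstack([z_msi, z_lidar]), y))  # joint
print(f"MSI only: {r2_msi:.3f}  joint: {r2_joint:.3f}")
```

Because the synthetic response draws on both modalities, the joint model recovers variance the single-sensor model cannot; in a real analysis the same comparison would be made with held-out data rather than in-sample fit.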