Recent advances in proximal remote sensing have increased the spatial and temporal resolution of data collection, as well as the accessibility of these technologies for precision agriculture. These sensors enable the collection of new and large quantities of data, which have been used successfully to determine phenotypes and parametrize crop growth models. So far, these data streams have mostly been used separately, even though each contains unique structural, spatial, and spectral information. This research therefore aims to integrate these disparate data sources to improve estimates of agronomically important crop traits. In this study, we examine two high-throughput, relatively inexpensive remote platforms: unoccupied ground vehicles (UGVs) and unoccupied aerial vehicles (UAVs). Data were collected on maize hybrids from the Genomes to Fields initiative over five years, from 2018 to 2022, in Aurora, NY. We used ground rovers to collect lidar scans, which were converted to point clouds, to reconstruct the three-dimensional sub-canopy architecture of maize plants. Multispectral sensors covering the red, green, blue, red-edge, and near-infrared (NIR) bands were deployed on a UAV platform to characterize maize canopies. Machine learning methods, including autoencoders, will be used to extract latent phenotypes from the lidar point clouds and multispectral images. Ultimately, these latent phenotypes will be used to predict manually measured traits, such as yield, to compare the prediction accuracies of models using these measurements separately and jointly.
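To make the latent-phenotype idea concrete, the sketch below shows one minimal way an autoencoder could compress lidar-derived plot summaries into a few latent features. This is an illustrative toy, not the study's actual pipeline: the input representation (per-plot canopy-height histograms), the network size, and all variable names are assumptions introduced here for exposition.

```python
# Hypothetical sketch: compress per-plot lidar summaries into latent phenotypes
# with a tiny one-hidden-layer autoencoder (tanh encoder, linear decoder),
# trained by plain gradient descent on reconstruction error.
import numpy as np

rng = np.random.default_rng(0)

# Assumed data: 200 plots, each summarized as a 32-bin canopy-height histogram.
X = rng.random((200, 32))
X = X / X.sum(axis=1, keepdims=True)  # normalize each histogram to sum to 1

n_latent = 4  # number of latent phenotypes to extract (arbitrary choice)
W_enc = rng.normal(0.0, 0.1, (32, n_latent))
W_dec = rng.normal(0.0, 0.1, (n_latent, 32))
lr = 0.5

# Reconstruction error before training, for comparison.
mse_init = float(np.mean((np.tanh(X @ W_enc) @ W_dec - X) ** 2))

for _ in range(2000):
    Z = np.tanh(X @ W_enc)   # latent phenotypes, one row per plot
    X_hat = Z @ W_dec        # reconstructed histograms
    err = X_hat - X
    # Backpropagate through the linear decoder and tanh encoder.
    grad_dec = Z.T @ err / len(X)
    grad_z = (err @ W_dec.T) * (1.0 - Z**2)
    grad_enc = X.T @ grad_z / len(X)
    W_dec -= lr * grad_dec
    W_enc -= lr * grad_enc

Z = np.tanh(X @ W_enc)  # final latent features: shape (200, 4)
mse = float(np.mean((Z @ W_dec - X) ** 2))
```

In a full pipeline, the rows of `Z` (and analogous latent features from multispectral imagery) would be the inputs to downstream models predicting manually measured traits such as yield.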