Evaluation and improvement of Type III resistance (lower mycotoxin accumulation) is an integral part of developing wheat varieties resistant to Fusarium head blight, and novel tools are needed to increase selection accuracy and intensity. Here, we explored phenomic prediction using hyperspectral imaging to predict deoxynivalenol (DON) content in soft winter wheat kernels. Across all Bayesian prediction models used, phenomic prediction achieved higher accuracy (0.63-0.67) than genomic prediction (0.55-0.60). We then applied the trained prediction models (Bayes C, Bayesian ridge regression, and Bayesian LASSO) to a testing set of F4:5 breeding lines, with selection carried out using unsupervised k-means clustering. A large proportion of the F4:5 breeding lines predicted to have low DON content also had low GC/MS-derived DON content. These results reveal the potential of hyperspectral imaging for predicting deoxynivalenol accumulation in soft winter wheat kernels with increased selection intensity.
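The clustering-based selection step described above can be sketched as follows. This is a minimal one-dimensional k-means (k = 2) written in plain Python for illustration only; the line indices and predicted DON values are hypothetical, and the abstract does not specify how the study's k-means was implemented.

```python
def kmeans_1d(values, k=2, iters=50):
    """Minimal 1-D k-means, illustrative for k = 2: returns (centroids, labels)."""
    # Initialize centroids at the minimum and maximum so runs are reproducible.
    centroids = [min(values), max(values)]
    labels = [0] * len(values)
    for _ in range(iters):
        # Assignment step: each value joins its nearest centroid.
        labels = [min(range(k), key=lambda c: abs(v - centroids[c]))
                  for v in values]
        # Update step: move each centroid to the mean of its members.
        for c in range(k):
            members = [v for v, lab in zip(values, labels) if lab == c]
            if members:
                centroids[c] = sum(members) / len(members)
    return centroids, labels

# Hypothetical predicted DON content (ppm) for six breeding lines.
predicted_don = [0.8, 1.1, 0.9, 6.5, 7.2, 5.9]
centroids, labels = kmeans_1d(predicted_don)
low_cluster = min(range(2), key=lambda c: centroids[c])
selected = [i for i, lab in enumerate(labels) if lab == low_cluster]
```

Lines falling in the low-centroid cluster (here indices 0, 1, and 2) would be advanced, without hand-picking a DON threshold.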
Dry bean (Phaseolus vulgaris L.) is the third-largest pulse crop grown in Canada. Due to climate change and extreme weather, dry bean varieties are subjected to abiotic and biotic stresses that affect yield stability and seed quality. Development of resilient cultivars is the most effective strategy to ensure the productivity and environmental sustainability of the dry bean crop. In this project, key phenotypic traits will be extracted for genetic improvement and development of elite cultivars with early maturity and high yield. Traditional phenotyping approaches are laborious, time-consuming, and subject to human error. Unmanned aerial vehicle (UAV)-based high-throughput phenotyping (HTP) has been changing how large-scale phenotyping is done in plant breeding. Aerial imaging systems offer a potential solution: an intensive tool for assessing complex traits across large numbers of dry bean genotypes. In this way, the HTP technique will be optimized to improve selection efficiency for agronomic, physiological, and disease resistance traits. In this study, two dry bean field trials, an Advanced Yield Trial (AYT) of the F7 generation [yellow bean (5 entries), pinto bean (20 entries)] and a Performance Yield Trial (PeYT) of the F8-F10 generations (49 entries), were grown in a randomized block design at the Fairfield Research Farm at AAFC Lethbridge, AB. Both field trials were imaged at specific developmental stages (vegetative, flowering, maturity) using UAV-mounted RGB and multispectral sensors. The acquired imagery has been processed to accurately overlay images from different dates (time-series data comparison). We analyzed RGB and multispectral images from three time points to identify valuable traits such as canopy height, crop lodging, physiological maturity, and accumulation of crop biomass over time.
From these preliminary results, we found that UAV-based HTP offers a significant advantage for non-destructive measurement of canopy-level functional traits. Assessment of these traits within the same climatic region can be used to identify crop characteristics that are important for screening high-quality dry bean experimental lines and cultivars under field conditions. In the long term, it will provide a consistent and reliable information system to rapidly screen thousands of individuals from breeding populations that need to be genotyped for morphological and physiological functional traits.
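Canopy traits from multispectral imagery like that described above are commonly summarized with vegetation indices. As a minimal sketch (the reflectance values are hypothetical, and the abstract does not specify which indices were computed), NDVI can be derived per pixel from the red and near-infrared bands:

```python
def ndvi(nir, red):
    """Normalized Difference Vegetation Index for one pixel."""
    return (nir - red) / (nir + red)

# Hypothetical per-pixel reflectances from a red and a NIR band.
red_band = [0.10, 0.12, 0.08]
nir_band = [0.50, 0.45, 0.60]
ndvi_map = [ndvi(n, r) for n, r in zip(nir_band, red_band)]
```

Averaging such per-pixel values within a plot boundary gives one plot-level trait per flight date, which is what makes time-series comparison across dates possible.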
Digital imaging technology has gained significant interest in recent decades, particularly in the field of high-throughput phenotyping (HTP) for plant breeding. Breeding programs generate thousands of new crop lines that require evaluation in multiple environments. Considerable effort has gone into genome-wide association studies (GWAS) and genomic selection (GS) to identify genetic markers and improve desirable crop characteristics. Selecting key phenotypes is an essential component of plant breeding, yet traditional methods require considerable resources and are subjective; breeders and geneticists therefore urgently need robust technology to identify desirable crop traits. HTP using advanced sensors is a promising approach to evaluate improved crop genotypes for traits of agronomic importance. In this project, six Research and Development Centres (RDCs) of Agriculture and Agri-Food Canada have been using the University of Saskatchewan-built Field Phenotyping System ("UFPS Cart") to phenotype a heritage bread wheat panel. The UFPS Cart is a proximal-sensing mobile platform equipped with multiple payloads (RTK GPS, RGB, NIR, and LiDAR sensors). To collect data across diverse climates, the panel of 30 Canadian western spring wheat varieties was grown in six environments. This study aims to develop large-scale data management and image analysis pipelines to quantify crop growth characteristics representing agronomic and physiological traits, supporting data-driven decision-making under genotype × environment effects. The multi-location imagery and ground observation data from the six environments are currently being processed on the internal General Public Science Cluster (GPSC) for deep learning training to develop prediction models and extract phenotypic traits of interest (canopy height, crop lodging, heading, maturity, grain yield, and protein content).
The developed tools and associated models will help accelerate advances in cereal breeding programs.
ORCiD: [https://orcid.org/0000-0003-0655-2343] Keywords: Root imaging, root-system architecture (RSA), soybean, 2D-phenotyping & 3D-phenotyping Roots are a major part of plant systems and are essential for obtaining water and nutrients. Despite their importance, roots have not been examined as extensively as their aboveground counterparts, due primarily to the difficulty of access and the lack of standard methods to quantify root morphology. While several experiments have been performed under controlled environments, comparatively few studies have examined root architecture under field conditions. Here, we apply two imaging techniques to characterize variability in root system architecture (RSA) in diverse soybean genotypes in field settings with two contrasting soil conditions. Our objectives are to (1) quantify root system architecture using 2D imaging techniques (e.g., WinRHIZO and ImageJ) and (2) evaluate a contrasting subset of these samples (n = 30) using a novel 3D phenotyping approach. This research seeks to meet the need for enhanced methods of root system architecture analysis across diverse field conditions, potentially leading to more resilient, high-yielding soybean varieties.
With increasing demand for sustainable food production, expediting innovation in the development of agricultural products is paramount for Bayer and similar companies. We used a novel gene-editing protocol, TREDMIL, that generates numerous events to increase our gene-editing capacity: it was used to create 800 distinct edits in soybeans at three Dt1 gRNA targets, across over 1,500 event sites distributed among 100 soy lines. With the ability to produce large numbers of editing events, our phenotypic testing had to evolve to keep pace. Using a hypothesis-based approach, we have refined phenotypic testing to measure relevant plant traits that impact yield. Our in-field phenotyping of the target set of plant traits feeds a machine learning model that adjusts small-plot yield. Testing in both corn and soy has demonstrated that small plots are predictive of large-scale yield testing (84% agreement) and that modeling with additional traits improved the predictive capacity to 93%.
Non-destructive, real-time monitoring of root development can help farmers improve crop resilience while minimizing resource use (Mervin et al., 2022). However, it remains an unexplored frontier for understanding root responses efficiently. In this study, we employed three in-soil fiber Bragg grating (FBG)-based fiber sensors to generate root phenotyping data and developed an automated method using the deep learning architecture ResNet to monitor underground root development. In a preliminary simulation experiment, we used two metal rods with diameters of 1 mm and 5 mm to mimic plant roots. These rods were inserted to a depth of 15 cm in two scenarios lasting 6 and 11 minutes, with the three in-soil FBG sensors continuously collecting data: two FBGs placed on the sides and one at the bottom. The sensor data were preprocessed, yielding 3228 samples for the root diameter prediction model and 477 for the root depth prediction model. We used an 80/20 split for training and testing the ResNet models to predict the artificial root diameter and ten different depth levels. The achieved accuracy was 0.95 for depth and 0.91 for diameter prediction. Overall, our study demonstrates the potential of ResNet architectures to accurately predict root depth and diameter from fiber-optic sensor data, suggesting that non-destructive root phenotyping in agricultural applications is possible. Future work will evaluate these models in field experiments to assess their real-world performance.
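The 80/20 train/test partition used for the ResNet models above can be sketched as a seeded, reproducible index split; this is a generic illustration in plain Python, not the study's preprocessing pipeline, and the random seed is an arbitrary placeholder.

```python
import random

def train_test_split(n_samples, test_frac=0.2, seed=42):
    """Return (train_idx, test_idx): a reproducible, disjoint 80/20 partition."""
    idx = list(range(n_samples))
    random.Random(seed).shuffle(idx)  # seeded shuffle for reproducibility
    n_test = int(n_samples * test_frac)
    return idx[n_test:], idx[:n_test]

# 3228 preprocessed samples for the diameter model, as stated in the abstract.
train_idx, test_idx = train_test_split(3228)
```

Fixing the seed means the same samples land in the held-out set on every run, so the reported 0.91/0.95 accuracies can be recomputed consistently.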
Agriculture utilizes large quantities of freshwater to maintain crop production at a level that meets global demand. As the world population expands rapidly, it is critical that we find more efficient ways to manage freshwater resources. The Plant DiTech Phenotyping Platform (Plant-DiTech LTD, Yavne, Israel) is a dynamic plant screening system that allows researchers to create controlled environments and rapidly collect a large quantity of physiological traits in real time. This platform opens the door to hundreds of potential experiments exploring growth trait responses to future environmental conditions and irrigation practices. Here, we test its utility on cultivated rice and upland cotton in a small pilot experiment. Cultivated rice (Oryza sativa) and upland cotton (Gossypium hirsutum) are both C3 crop species grown throughout the U.S. and many other countries and are critical for food and fiber production, respectively. Cotton is typically produced in the warm, dry regions of the southern U.S. (USDA-ERS), where it relies heavily on irrigation to remain productive, while rice requires flooded conditions to maintain high productivity. Using these two test species, a series of growth trait measurements, in conjunction with data recorded by the Plant DiTech phenotyping platform, are analyzed to test the linkages between organ- and canopy-level traits.
In recent years, the use of field-based high-throughput phenotyping (FHTP) has surged across diverse disciplines. It has gained particular traction in agricultural research, enabling scientists to efficiently gather extensive data for a deeper understanding of plant growth dynamics. This abstract demonstrates potential applications of high-throughput phenotyping data in plant biology and predictive plant breeding. The study utilized temporal phenotype data derived from repeated drone flights with various sensors. These data were incorporated into a novel mixed model, providing insights into temporal genetic effects on different genotypes. Gaussian and Lorentzian peak models, as well as functional principal component analysis, were employed to characterize the growth patterns of genotypes in diverse environments. The research revealed that temporal effect sizes of quantitative trait loci (QTLs) influence growth differently across time points, highlighting the dynamic nature of plant development. The study also uncovered time-dependent associations between genotypes and their environments based on temporal phenotype values. The predictive capability of temporal phenomic data surpassed that of genomic data in predicting complex traits in maize; however, combining phenomic and genomic data consistently yielded the most accurate predictions. By analyzing drone flights at specific growth stages, the study quantified physiological traits such as senescence progression across multiple time points. This analysis led to the calculation of new traits, including days to senescence and grain-filling period, providing valuable insights into plant development and growth dynamics.
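The Gaussian peak model mentioned above, used to characterize temporal growth curves, reduces to a three-parameter function of time. The parameter values and the canopy-cover interpretation below are hypothetical placeholders, not the study's fitted values:

```python
import math

def gaussian_peak(t, amplitude, peak_time, width):
    """Gaussian peak model: temporal phenotype value at time t (days)."""
    return amplitude * math.exp(-((t - peak_time) ** 2) / (2 * width ** 2))

# Hypothetical canopy-cover trajectory peaking 60 days after planting,
# sampled every 10 days across a 120-day season.
curve = [gaussian_peak(t, amplitude=0.9, peak_time=60, width=15)
         for t in range(0, 121, 10)]
```

Fitting (amplitude, peak_time, width) per genotype per environment converts a whole flight series into three interpretable traits, which is what makes the time-dependent genotype-environment associations tractable.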
Wheat is an important primary crop that nourishes billions of people worldwide. Wheat diseases, particularly Fusarium head blight (FHB), often severely affect wheat yield in both quantity and quality, posing potential threats to the health of humans and livestock. Traditional methods for monitoring and assessing wheat diseases, such as field surveys, are time-consuming, costly, and inefficient. In recent years, remote sensing approaches, particularly aerial imaging using unmanned aerial vehicles (UAVs), have become invaluable tools for rapid field scouting at larger scales, as well as for monitoring crop growth and health status. This study investigates the potential of combining high-resolution UAV multispectral imagery with machine learning (ML) methods to estimate FHB disease severity. Two experimental wheat fields were established at Volga, South Dakota, USA, in 2022. FHB severity was assessed and rated in the fields, and synchronous UAV flights were conducted to collect multispectral imagery. Canopy spectral and texture features were derived from the imagery and used as input variables for ML models to predict FHB severity levels. Both classification and regression approaches were applied to estimate FHB severity using ML models such as Random Forest, Support Vector Machine, and deep neural networks. The results show that both canopy spectral and texture features are important indicators for monitoring FHB severity. Furthermore, UAV remote sensing combined with ML-based modeling is a sustainable approach for rapid and accurate detection of wheat FHB severity.
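The spectral and texture features above can both be computed per plot from the multispectral bands. As a deliberately simple sketch (the abstract does not say which texture measures were used; within-plot variance stands in here for a texture feature, and the pixel values are hypothetical):

```python
def band_mean(band):
    """Spectral feature: mean reflectance of a canopy band within a plot."""
    return sum(band) / len(band)

def band_variance(band):
    """Simple texture proxy: variance of pixel values within the plot."""
    mu = band_mean(band)
    return sum((v - mu) ** 2 for v in band) / len(band)

# Hypothetical pixel values from one plot in one multispectral band.
pixels = [0.30, 0.32, 0.28, 0.35, 0.25]
features = {"mean": band_mean(pixels), "variance": band_variance(pixels)}
```

One such feature pair per band per plot forms the input row that the Random Forest, SVM, or neural network models map to an FHB severity rating.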
Unoccupied/unmanned/uncrewed aerial systems (UAS, also known as drones) are tools that can provide field-based phenotyping and phenomics-derived insights into plant breeding, biology, genetics, and agronomy. Many important yet disparate agricultural UAS activities occur in silos; with better communication across research, education, and extension, they could create transformative change for all stakeholders. The goal of this USDA-NIFA- and AG2PI-supported project is to advance knowledge and activities by promoting UAS data collection, processing, analysis, and community discussion. The objectives are to: 1) encourage collaboration on best practices among university, industry, and individual stakeholders who are developing and using UAS tools, from beginners to experts; 2) process Genomes to Fields (G2F) datasets (2017 to 2023, up to eleven locations, with ~6 TB of data) into widely accessible, usable end products the community can use directly; and 3) establish a user-centered webpage so that contributors and interested parties can conveniently access information about UAS-based HTP. To accomplish these goals, a UAS Project Coordinator will identify existing UAS user groups in agriculture, listen to seasoned advice and newcomer needs, act as a liaison to facilitate the sharing of wisdom, and summarize and share knowledge and personal connections across disciplines, institutions, and species, making this information easier to access in the future. The project aims to ease the learning process for beginners using UAS tools while sharing the advancements of experienced users at the forefront of innovation and discovery.
Phosphorus (P) is a vital macronutrient for building essential biomolecules in plants, and its accurate quantification can guide effective crop management and increase growers' profits. Traditional chemical reaction-based methods for measuring P levels in plants are destructive and complex. Hyperspectral imaging offers a real-time, non-destructive avenue for assessing crop nutrient status. While these images are rich in both spatial and spectral information, limitations in current devices and analytical algorithms have led most studies to concentrate solely on spectral features. In this study, a novel algorithm combining features from the spatial and spectral domains is proposed and implemented to differentiate phosphorus deficiency symptoms in corn plants. At the V6 vegetative stage, leaf-level hyperspectral images from three P levels and two leaf positions were collected using the handheld proximal hyperspectral imager LeafSpec. Spatial and spectral features exhibiting significant differences between the P treatments were generated by integrating pre-designed spatial partitions with spectral index maps. The correlation coefficient between P content and each selected spatial-spectral feature was used as the criterion to further refine the mining results. The joint spatial-spectral features showed superior ability to spectral indices alone in differentiating P deficiency at both leaf positions, especially between medium and sufficient P. Related visualization maps also provided preliminary insight into differences in P deficiency symptoms. This study highlights the potential and effectiveness of combining spatial and spectral features to differentiate P levels at corn's early vegetative stage.
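The refinement criterion above, the correlation coefficient between measured P content and each candidate feature, can be sketched with a plain-Python Pearson correlation. The P contents and the feature values below are hypothetical, chosen only to illustrate a perfectly correlated candidate:

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical leaf P contents (%) and one candidate spatial-spectral feature.
p_content = [0.18, 0.25, 0.31, 0.40]
feature = [0.36, 0.50, 0.62, 0.80]
r = pearson_r(p_content, feature)
```

Ranking mined features by |r| and keeping the top candidates is one common way such a correlation criterion is applied in feature selection.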
Phenotyping plays a crucial role in parameterizing, calibrating, and evaluating process-based plant models, which can be used to understand and predict crop behavior under various conditions. This, in turn, allows for more informed decision-making in agriculture and provides insights into how crops may respond to changing environmental factors. Nevertheless, several potential disparities between current plant models and phenotyping methods exist, which can undermine the precision and applicability of these models. These discrepancies include differences in data resolution and scale (both spatial and temporal) between what is feasibly gathered during phenotyping efforts and what the models require. Additionally, mismatches between the traits measured during phenotyping campaigns and the parameters required by models are prevalent. Furthermore, the representation of genetic and environmental variability in both models and phenotyping data is often limited, resulting in a gap between these two components. To address these discrepancies, we are developing a new "minimum" plant model, designed with strong consideration of what data can be measured in phenotyping campaigns across multiple genotypes. While the core model remains straightforward, it can simulate the essential processes of fundamental plant physiology. Moreover, it can be integrated into larger, more comprehensive models or parameterized for specific genotypes. This approach aims to bridge the gap between phenotyping and crop models, enhancing their effectiveness in addressing the challenges that emerge when they are used in broader agricultural applications.
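A "minimum" plant model of the kind described above can be as small as a single growth equation stepped forward in time. The sketch below uses logistic growth with forward-Euler integration; the parameter values are hypothetical placeholders and this is not the authors' model, only an illustration of how few parameters such a core can have while still being fittable from phenotyping data:

```python
def simulate_biomass(days, rate=0.1, capacity=100.0, biomass0=1.0, dt=1.0):
    """Forward-Euler integration of logistic growth dB/dt = r*B*(1 - B/K)."""
    biomass = biomass0
    trajectory = [biomass]
    for _ in range(int(days / dt)):
        biomass += dt * rate * biomass * (1 - biomass / capacity)
        trajectory.append(biomass)
    return trajectory

# Hypothetical 150-day season: biomass rises from 1 toward the capacity K = 100.
traj = simulate_biomass(days=150)
```

Each of the three parameters (rate, capacity, initial biomass) maps directly onto traits a drone or cart campaign can observe repeatedly over a season, which is the compatibility property the abstract argues for.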
Deep learning is a central tool in plant phenotyping, but proficiency in artificial intelligence and programming remains essential for designing a deep learning model, and reusing deep models can be challenging for non-coding end-users such as plant biologists. Currently, very few tools facilitate collaboration between computational infrastructure, non-coding end-users, and deep learning project managers. This limits the deployment of multi-centric, large-scale deep learning initiatives in plant phenotyping (the Global Wheat Challenge being a notable exception). We propose the scheme of Fig. 1 to allow non-coding end-users to test existing deep learning models with homemade software capable of running any segmentation, classification, or object detection model. This software is an open-source plugin named MANINI (https://github.com/hereariim/manini), which runs in the napari viewer (https://napari.org/). The plugin also enables manual correction of the inference. The corrected inferences can then be used to retrain or fine-tune the model on a large-scale infrastructure (in our case, the European Grid Infrastructure, EGI) via the DEEPaaS API. The resulting model is then made accessible in a public repository. We illustrate the interest of this scheme on various plant phenotyping use cases.
Comparing the genomes of two species offers a robust approach to unveil potentially overlooked genes that wield substantial influence on observed agronomic traits. Extensive phenotypic data from maize and sorghum were previously utilized in genome-wide association studies. This project aims to harness the potential of comparative genomics to strengthen confidence in these marker-trait associations and to suggest previously unexplored relationships. To achieve this goal, insights from studies on the genetically diverse Sorghum Association Panel and the maize Wisconsin Diversity Panel were leveraged to classify candidate genes as either shared between the two species or unique to one or the other. Candidate orthologous genes found in both species, associated with shared phenotypes, enhance the reliability of the associations within each species. Additionally, genes unique to each species provide parameters to inform future predictive models. Finally, given the ancient tetraploidy of maize and the biased loss of genes over time, the candidate genes identified in sorghum provide valuable information for understanding the orthologs found in the maize subgenomes.
Enhancing photosynthesis for increased sorghum grain yield has become a key focus in sorghum breeding efforts. Phenotyping, involving the measurement of various morpho-physiological and physical traits associated with photosynthesis and grain yield, is a time-intensive process. However, the potential of non-invasive leaf-level hyperspectral imaging to swiftly detect plant performance, optimizing photosynthesis and grain yield, is promising. This study aimed to evaluate the feasibility of utilizing hyperspectral reflectance in the 350–950 nm range for the rapid estimation of these traits in intact sorghum leaves. Multiple machine learning regression algorithms were developed using leaf-level hyperspectral reflectance data from nearly 400 sorghum accessions within an association panel. The best-performing prediction models were then considered as potential methods for constructing a prediction model targeting multiple other physiological and yield traits in sorghum accessions. The results indicate that this approach enables the early detection of leaf photosynthetic and yield traits through leaf-level hyperspectral reflectance without the need for a full-range, high-cost leaf spectrometer.
Plot extraction for field trials plays a foundational role in supporting research and development for agricultural applications, including crop type classification, crop yield estimation, and crop health monitoring. Traditional methods for defining these plots trade off substantial human labor against the precision of the delineated plots. In our research, we introduce a semi-automatic framework for plot extraction that requires only two essential inputs: the width and height of the plot. Our framework leverages the Segment Anything Model (SAM) for image segmentation, producing masks that are subsequently converted into polygons. These generated polygons are then filtered based on the user input, and we refine their positions and orientations by maximizing their overlap with the actual plot field. Experimental results were evaluated by comparing the extracted boundaries with manually digitized ground truth data. The results demonstrate successful extraction of individual plots across fields with diverse crop types. We expect this framework to significantly reduce the manual labor required for precise plot extraction, enhancing the ease and efficiency of this critical prerequisite task.
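The filtering step above, keeping only SAM-derived polygons whose footprint matches the user-supplied plot width and height, can be sketched as a bounding-box check. The candidate polygons, target dimensions, and tolerance below are hypothetical, and the real framework additionally refines position and orientation, which this sketch omits:

```python
def bbox(polygon):
    """Axis-aligned bounding box (width, height) of a polygon of (x, y) points."""
    xs = [p[0] for p in polygon]
    ys = [p[1] for p in polygon]
    return max(xs) - min(xs), max(ys) - min(ys)

def filter_plots(polygons, width, height, tol=0.2):
    """Keep polygons whose bounding box is within +/- tol of the target size."""
    kept = []
    for poly in polygons:
        w, h = bbox(poly)
        if abs(w - width) <= tol * width and abs(h - height) <= tol * height:
            kept.append(poly)
    return kept

# Hypothetical SAM masks converted to polygons (target plot: 2 m x 6 m).
candidates = [
    [(0, 0), (2.1, 0), (2.1, 5.9), (0, 5.9)],   # plausible plot
    [(0, 0), (0.4, 0), (0.4, 0.5), (0, 0.5)],   # small spurious mask
]
plots = filter_plots(candidates, width=2.0, height=6.0)
```

Only the first candidate survives; spurious masks from weeds, alleys, or equipment fall outside the size tolerance and are discarded before refinement.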
Source-Sink Regulated Senescence (SSRS) in maize is a complex trait involving sugar sensing and carbohydrate partitioning. While the yield effects of SSRS are still under investigation, mapping work in Midwest commercial germplasm has found evidence for QTL on several chromosomes. In this study, we seek to understand the variation of this trait in global maize germplasm and map the genetic loci linked to this senescence response using the Maize Nested Association Mapping population. Wide variation was seen in expression of the senescence phenotype including several lines that are potentially anti-senescent in response to sink removal. Phenotypes of NAM RIL populations support the possibility of anti-senescent or senescence suppressing genetic factors. Joint linkage mapping identified QTL for the SSRS trait on three chromosomes which are distinct from those previously reported. By understanding and mapping the global diversity of this trait, we can better understand the physiology of maize senescence and integrate insights into breeding programs.
Recent progress in proximal remote sensing has elevated both the spatial and temporal resolution of data acquisition, expanding the accessibility of these technologies for digital agriculture applications. These advanced sensors enable the gathering of extensive and novel datasets, proving instrumental in accurately characterizing phenotypes and parameterizing models for crop growth. Despite the distinctive structural, spatial, and spectral information embedded in these data streams, they have predominantly been utilized in isolation. Thus, this research aims to integrate these disparate data sources to improve estimations of agronomically important crop traits, such as yield. Deep learning methods, such as autoencoders, will be used to extract latent phenotypes, which will be used to characterize manually measured traits. We focus on multispectral images (MSIs) collected by unoccupied aerial vehicles and lidar scans collected by unoccupied ground vehicles. MSIs capture canopy-level spectral information, including the red, green, blue, red edge, and near infrared bands. Lidar scans are converted to point clouds to construct the three-dimensional sub-canopy architecture of maize plants. Data were collected on maize hybrids as part of the Genomes to Fields project, from 2018 to 2022, in Aurora, NY. Autoencoder model training on MSIs shows that latent phenotypes are effective image representations, containing relevant and sufficient information to generate image reconstructions. The latent codes are also predictive of the image date and normalized difference vegetation index values. Latent phenotypes were extracted from the lidar point clouds as well, and the prediction accuracies of models using these measurements separately and jointly will be compared.
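The latent-phenotype idea above can be illustrated with the simplest possible linear analogue of an autoencoder: projection onto a single latent axis and reconstruction from it. Real autoencoders are nonlinear and learned from data; the fixed axis and the two-band samples below are hypothetical, chosen so a one-dimensional latent code reconstructs the inputs exactly:

```python
import math

def encode(pixel_vec, direction):
    """Linear encoder: project a spectral vector onto one latent axis."""
    return sum(p * d for p, d in zip(pixel_vec, direction))

def decode(latent, direction):
    """Linear decoder: map the latent scalar back to the input space."""
    return [latent * d for d in direction]

# Hypothetical 2-band observations that vary along a single direction,
# so a 1-D latent phenotype captures them without loss.
norm = math.sqrt(5.0)
axis = (1 / norm, 2 / norm)          # unit-length latent direction
samples = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]
latents = [encode(s, axis) for s in samples]
recon = [decode(z, axis) for z in latents]
```

When reconstructions like `recon` stay close to the inputs, the latent codes carry the information the abstract describes, and they can then serve as compact predictors of traits such as image date or NDVI.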