Ioannis Athanasiadis

Professor and Chair of Artificial Intelligence
Wageningen University & Research


Interpretability of deep learning models for crop yield forecasting

D. Paudel, A. de Wit, H. Boogaard, D. Marcos, S. Osinga, I.N. Athanasiadis

Abstract

Machine learning models for crop yield forecasting often rely on expert-designed features or predictors. The effectiveness and interpretability of these handcrafted features depend on the expertise of the people designing them. Neural networks can learn features directly from input data and train the feature learning and prediction steps simultaneously. In this paper, we evaluate the performance and interpretability of neural network models for crop yield forecasting using data from the MARS Crop Yield Forecasting System of the European Commission’s Joint Research Centre. The selected neural networks can handle sequential or time series data and include the long short-term memory (LSTM) recurrent neural network and the 1-dimensional convolutional neural network (1DCNN). Performance was compared with a linear trend model and a Gradient-Boosted Decision Trees (GBDT) model trained using hand-designed features. Feature importance scores of input variables were computed using feature attribution methods and were analyzed by crop yield modeling and agronomy experts. Results showed that LSTM models perform statistically better than GBDT models for soft wheat in Germany and similarly to GBDT models for all other case studies. In addition, LSTM models captured the effects of yield trend, static features (e.g. elevation, soil water holding capacity) and biomass features on crop yield well, but struggled to capture the impact of extreme temperature and moisture conditions. Our work shows the potential of deep learning to automatically learn features and produce reliable crop yield forecasts, and highlights the importance and challenges of involving human stakeholders in assessing model interpretability.
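The feature attribution step described above can be illustrated with a minimal, hypothetical sketch. The paper applies attribution methods to trained LSTM and 1DCNN models; here, purely for illustration, a simple permutation-based importance score is computed for a toy stand-in model and made-up feature names (biomass, temperature, noise). All names and data below are illustrative assumptions, not the paper's actual setup.

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy stand-in for a trained yield model: predicted yield depends strongly
# on "biomass" (column 0), weakly on "temperature" (column 1), and not at
# all on "noise" (column 2). The real models in the paper are LSTM/1DCNN.
def toy_yield_model(X):
    return 3.0 * X[:, 0] + 0.5 * X[:, 1]

X = rng.normal(size=(500, 3))
y = toy_yield_model(X) + rng.normal(scale=0.1, size=500)

def permutation_importance(model, X, y, n_repeats=10):
    """Score each feature by the increase in MSE when its values are shuffled."""
    base_mse = np.mean((model(X) - y) ** 2)
    scores = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        for _ in range(n_repeats):
            Xp = X.copy()
            Xp[:, j] = rng.permutation(Xp[:, j])  # break feature-target link
            scores[j] += np.mean((model(Xp) - y) ** 2) - base_mse
    return scores / n_repeats

scores = permutation_importance(toy_yield_model, X, y)
# Expected ordering: biomass >> temperature > noise (near zero)
print(dict(zip(["biomass", "temperature", "noise"], scores.round(3))))
```

In the sketch, the importance ranking recovers the toy model's true structure; in the paper, analogous attribution scores were inspected by agronomy experts to judge whether the learned relationships are plausible.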


Published as:
D. Paudel, A. de Wit, H. Boogaard, D. Marcos, S. Osinga, I.N. Athanasiadis, Interpretability of deep learning models for crop yield forecasting, Computers and Electronics in Agriculture, 206:107663, 2023, doi:10.1016/j.compag.2023.107663.


Highlights

  • Deep learning models learn plausible relationships between features and crop yield
  • Neural networks learn features that perform similarly to expert-designed features
  • Feature influence on yield matches expert knowledge when model accuracy is high
