Automated spectroscopic modelling with optimised convolutional neural networks
