Machine learning, optimization, and data science : 6th International Conference, LOD 2020, Siena, Italy, July 19-23, 2020, revised selected papers. Part II.

This two-volume set, LNCS 12565 and 12566, constitutes the refereed proceedings of the 6th International Conference on Machine Learning, Optimization, and Data Science, LOD 2020, held in Siena, Italy, in July 2020. The total of 116 full papers presented in this two-volume post-conference proceedings...

Bibliographic Details
Corporate Author: LOD (Conference) (Siena, Italy)
Other Authors: Nicosia, Giuseppe, Ojha, Varun, La Malfa, Emanuele, Jansen, Giorgio, Sciacca, Vincenzo, Pardalos, P. M. (Panos M.), 1954-, Giuffrida, Giovanni, Umeton, Renato
Format: eBook
Language: English
Published: Cham : Springer, 2021.
Series: Lecture notes in computer science ; 12566.
LNCS sublibrary. Information systems and applications, incl. Internet/Web, and HCI.
Table of Contents:
  • Intro
  • Preface
  • Organization
  • Contents - Part II
  • Contents - Part I
  • Multi-kernel Covariance Terms in Multi-output Support Vector Machines
  • 1 Introduction
  • 2 Related Work
  • 3 The Proposed Model
  • 4 Results and Discussion
  • 4.1 Jura Dataset
  • 4.2 Preliminary Results
  • 4.3 Discussion
  • 5 Conclusions
  • References
  • Generative Fourier-Based Auto-encoders: Preliminary Results
  • 1 Introduction
  • 2 Problem Formulation
  • 3 Our Framework
  • 4 Experiments
  • References
  • Parameterized Structured Pruning for Deep Neural Networks
  • 1 Introduction
  • 2 Parameterized Pruning
  • 2.1 Parameterization
  • 2.2 Regularization
  • 2.3 Pruning
  • 2.4 Hardware-Friendly Structures in CNNs
  • 3 Experiments
  • 3.1 Pruning Different Structures
  • 3.2 CIFAR10/100 and ImageNet
  • 3.3 Ablation Experiments
  • 4 Related Work
  • 5 Conclusion
  • References
  • FoodViz: Visualization of Food Entities Linked Across Different Standards
  • 1 Introduction
  • 2 Food Named-Entity Recognition
  • 3 Food Data Normalization
  • 4 FoodViz
  • 5 Conclusion
  • References
  • Sparse Perturbations for Improved Convergence in Stochastic Zeroth-Order Optimization
  • 1 Introduction
  • 2 Related Work
  • 3 Sparse Perturbations in SZO Optimization for Nonconvex Objectives
  • 4 Convergence Analysis for Sparse SZO Optimization
  • 5 Algorithms for Sparse SZO Optimization
  • 5.1 Masking Strategies
  • 5.2 Sparse Perturbations with Pruning or Freezing
  • 6 Experiments
  • 6.1 Task and Datasets
  • 6.2 Architectures and Experimental Settings
  • 6.3 Experimental Results
  • 7 Conclusion
  • A Notation
  • B Proof for Lemma 1
  • C Proof for Lemma 2
  • D Proof for Lemma 3
  • E Proof for Theorem 1
  • F Miscellaneous
  • G Empirical Sparsity
  • References
  • Learning Controllers for Adaptive Spreading of Carbon Fiber Tows
  • 1 Introduction to Spreading of Carbon Fiber Tows
  • 1.1 Related Work
  • 2 The Process Model - A Supervised System Predictor
  • 2.1 Data Acquisition
  • 2.2 Models
  • 3 A Process Control Model
  • 4 Evaluation
  • 4.1 Process Model
  • 4.2 Process Control Model
  • 5 Conclusion
  • References
  • Ensemble Kalman Filter Optimizing Deep Neural Networks: An Alternative Approach to Non-performing Gradient Descent
  • 1 Introduction
  • 2 Ensemble Kalman Filter Optimizing Neural Networks
  • 2.1 Description and Properties of the Ensemble Kalman Filter
  • 2.2 Experimental Setup
  • 3 Numerical Results
  • 3.1 Non Evolving Gradients and Activation Values with SGD
  • 3.2 Slowly Evolving Gradients and Activation Values with ADAM
  • 3.3 Performance of the Ensemble Kalman Filter
  • 4 Conclusions and Outlook
  • References
  • Effects of Random Seeds on the Accuracy of Convolutional Neural Networks
  • 1 Introduction
  • 2 Background
  • 3 Experimental Setup
  • 4 Results and Discussion
  • 5 Conclusion
  • 6 Future Work
  • References