Machine learning and knowledge discovery in databases. Research track : European Conference, ECML PKDD 2021, Bilbao, Spain, September 13-17, 2021, Proceedings, Part II / Nuria Oliver, Fernando Pérez-Cruz, Stefan Kramer, Jesse Read, José A. Lozano (eds.).

The multi-volume set LNAI 12975 to 12979 constitutes the refereed proceedings of the European Conference on Machine Learning and Knowledge Discovery in Databases, ECML PKDD 2021, held during September 13-17, 2021. The conference was originally planned to take place in Bilbao, Spain, but was changed to an online event due to the COVID-19 pandemic.

Bibliographic Details
Corporate Author: ECML PKDD (Conference) (Online)
Other Authors: Oliver, Nuria, 1970- (Editor), Pérez-Cruz, Fernando (Editor), Kramer, Stefan, Prof. Dr. (Editor), Read, Jesse (Editor), Lozano, José A., 1968- (Editor)
Format: eBook
Language: English
Published: Cham, Switzerland : Springer, 2021.
Series: Lecture notes in computer science. Lecture notes in artificial intelligence.
Lecture notes in computer science ; 12976.
LNCS sublibrary. Artificial intelligence.
Subjects:
Online Access: Click for online access
Table of Contents:
  • Intro
  • Preface
  • Organization
  • Contents
  • Part II
  • Generative Models
  • Non-exhaustive Learning Using Gaussian Mixture Generative Adversarial Networks
  • 1 Introduction
  • 2 Related Work
  • 3 Background
  • 4 Methodology
  • 5 Experiments
  • 6 Conclusion
  • References
  • Unsupervised Learning of Joint Embeddings for Node Representation and Community Detection
  • 1 Introduction
  • 2 Related Work
  • 2.1 Community Detection
  • 2.2 Node Representation Learning
  • 2.3 Joint Community Detection and Node Representation Learning
  • 3 Methodology
  • 3.1 Problem Formulation
  • 3.2 Variational Model
  • 3.3 Design Choices
  • 3.4 Practical Aspects
  • 3.5 Complexity
  • 4 Experiments
  • 4.1 Synthetic Example
  • 4.2 Datasets
  • 4.3 Baselines
  • 4.4 Settings
  • 4.5 Discussion of Results
  • 4.6 Hyperparameter Sensitivity
  • 4.7 Training Time
  • 4.8 Visualization
  • 5 Conclusion
  • References
  • GraphAnoGAN: Detecting Anomalous Snapshots from Attributed Graphs
  • 1 Introduction
  • 2 Related Work
  • 3 Problem Definition
  • 4 Proposed Algorithm
  • 4.1 GAN Modeling
  • 4.2 Architecture
  • 4.3 Training Procedure
  • 5 Datasets
  • 6 Experiments
  • 6.1 Baselines
  • 6.2 Comparative Evaluation
  • 6.3 Side-by-Side Diagnostics
  • 7 Conclusion
  • References
  • The Bures Metric for Generative Adversarial Networks
  • 1 Introduction
  • 2 Method
  • 3 Empirical Evaluation of Mode Collapse
  • 3.1 Artificial Data
  • 3.2 Real Images
  • 4 High Quality Generation Using a ResNet Architecture
  • 5 Conclusion
  • References
  • Generative Max-Mahalanobis Classifiers for Image Classification, Generation and More
  • 1 Introduction
  • 2 Background and Related Work
  • 2.1 Energy-Based Models
  • 2.2 Alternatives to the Softmax Classifier
  • 3 Methodology
  • 3.1 Approach 1: Discriminative Training
  • 3.2 Approach 2: Generative Training
  • 3.3 Approach 3: Joint Training
  • 3.4 GMMC for Inference
  • 4 Experiments
  • 4.1 Hybrid Modeling
  • 4.2 Calibration
  • 4.3 Out-Of-Distribution Detection
  • 4.4 Robustness
  • 4.5 Training Stability
  • 4.6 Joint Training
  • 5 Conclusion and Future Work
  • References
  • Gaussian Process Encoders: VAEs with Reliable Latent-Space Uncertainty
  • 1 Introduction
  • 1.1 Contributions
  • 2 Background
  • 2.1 Variational Autoencoder
  • 2.2 Latent Variance Estimates of NN
  • 2.3 Mismatch Between the Prior and Approximate Posterior
  • 3 Methodology
  • 3.1 Gaussian Process Encoder
  • 3.2 The Implications of a Gaussian Process Encoder
  • 3.3 Out-of-Distribution Detection
  • 4 Experiments
  • 4.1 Log Likelihood
  • 4.2 Uncertainty in the Latent Space
  • 4.3 Benchmarking OOD Detection
  • 4.4 OOD Pollution of the Training Data
  • 4.5 Synthesizing Variants of Input Data
  • 4.6 Interpretable Kernels
  • 5 Related Work
  • 6 Conclusion
  • References
  • Variational Hyper-encoding Networks
  • 1 Introduction
  • 2 Variational Autoencoder (VAE)
  • 3 Variational Hyper-encoding Networks