Advances in knowledge discovery and data mining : 28th Pacific-Asia Conference on Knowledge Discovery and Data Mining, PAKDD 2024, Taipei, Taiwan, May 7-10, 2024, Proceedings. Part III / De-Nian Yang, Xing Xie, Vincent S. Tseng, Jian Pei, Jen-Wei Huang, Jerry Chun-Wei Lin, editors.
Material type: Text
Series: Lecture notes in computer science. Lecture notes in artificial intelligence ; 14647 | LNCS sublibrary: SL 7, Artificial intelligence
Publisher: Singapore : Springer, 2024
Description: 1 online resource (xxxiv, 422 pages) : illustrations (some color)
Content type: text
Media type: computer
Carrier type: online resource
ISBN: 9789819722594; 9819722594
Other title: PAKDD 2024
DDC classification: 006.3/12 23/eng/20240501
LOC classification: QA76.9.D343
| Item type | Current library | Collection | Call number | Status | Date due | Barcode | Item holds |
|---|---|---|---|---|---|---|---|
| eBook | e-Library | eBook LNCS | | Available | | | |
The 6-volume set LNAI 14645-14650 constitutes the proceedings of the 28th Pacific-Asia Conference on Knowledge Discovery and Data Mining, PAKDD 2024, which took place in Taipei, Taiwan, during May 7-10, 2024. The 177 papers presented in these proceedings were carefully reviewed and selected from 720 submissions. They deal with new ideas, original research results, and practical development experiences from all KDD-related areas, including data mining, data warehousing, machine learning, artificial intelligence, databases, statistics, knowledge engineering, big data technologies, and foundations.
Includes author index.
Online resource; title from PDF title page (SpringerLink, viewed May 1, 2024).
Contents: Intro -- General Chairs' Preface -- PC Chairs' Preface -- Organization -- Contents - Part III -- Interpretability and Explainability -- Neural Additive and Basis Models with Feature Selection and Interactions -- 1 Introduction -- 2 Generalized Additive Models (GAMs) -- 2.1 Neural Additive Model (NAM) -- 2.2 Neural Basis Model (NBM) -- 3 NAM and NBM with Feature Selection -- 3.1 Motivation -- 3.2 Model Architecture -- 3.3 Implementation Remark -- 4 Discussion of Model Complexities -- 5 Experiments -- 5.1 Experimental Settings -- 5.2 Baselines -- 5.3 Results -- 6 Conclusion -- References
Random Mask Perturbation Based Explainable Method of Graph Neural Networks -- 1 Introduction -- 2 Related Work -- 3 Problem Statement -- 4 Explainable Method -- 4.1 Node Importance Based on Fidelity -- 4.2 Explanation Sparsity -- 5 Experiments -- 5.1 Experimental Setup -- 5.2 Quantitative Experiments -- 5.3 Ablation Study -- 5.4 Use Case -- 6 Conclusion -- References -- RouteExplainer: An Explanation Framework for Vehicle Routing Problem -- 1 Introduction -- 2 Related Work -- 3 Proposed Framework: RouteExplainer -- 3.1 Many-to-Many Edge Classifier -- 3.2 Counterfactual Explanation for VRP
4 Experiments -- 4.1 Quantitative Evaluation of the Edge Classifier -- 4.2 Qualitative Evaluation of Generated Explanations -- 5 Conclusion and Future Work -- References -- On the Efficient Explanation of Outlier Detection Ensembles Through Shapley Values -- 1 Introduction -- 2 Related Work -- 3 Outlier Detection Ensembles -- 4 The bagged Shapley Values -- 5 Theoretical Guarantees for the Approximation -- 6 Experiments -- 6.1 Quality of the Approximation -- 6.2 Effectiveness -- 6.3 Scalability -- 7 Conclusions -- References -- Interpreting Pretrained Language Models via Concept Bottlenecks
1 Introduction -- 2 Related Work -- 2.1 Interpreting Pretrained Language Models -- 2.2 Learning from Noisy Labels -- 3 Enable Concept Bottlenecks for PLMs -- 3.1 Problem Setup -- 4 C3M: A General Framework for Learning CBE-PLMs -- 4.1 ChatGPT-Guided Concept Augmentation -- 4.2 Learning from Noisy Concept Labels -- 5 Experiments -- 6 Conclusion -- A Definitions of Training Strategies -- B Details of the Manual Concept Annotation for the IMDB Dataset -- C Implementation Detail -- D Parameters and Notations -- E Statistics of Data Splits -- F Statistics of Concepts in Transformed Datasets
G More Results on Explainable Predictions -- H A Case Study on Test-Time Intervention -- I Examples of Querying ChatGPT -- References -- Unmasking Dementia Detection by Masking Input Gradients: A JSM Approach to Model Interpretability and Precision -- 1 Introduction -- 2 Related Work -- 3 Methods -- 3.1 Jacobian Saliency Map (JSM) -- 3.2 Jacobian-Augmented Loss Function (JAL) -- 4 Experiments -- 4.1 Dataset -- 4.2 Preprocessing -- 4.3 Multimodal Classification -- 4.4 Performance Evaluation -- 5 Conclusion -- References -- Towards Nonparametric Topological Layers in Neural Networks