AI verification : first International Symposium, SAIV 2024, Montreal, QC, Canada, July 22-23, 2024, proceedings / Guy Avni, Mirco Giacobbe, Taylor T. Johnson, Guy Katz, Anna Lukina, Nina Narodytska, Christian Schilling, editors.
Material type: Text
Series: Lecture notes in computer science ; 14846
Publisher: Cham : Springer, 2024
Description: 1 online resource (ix, 189 pages) : illustrations (some color)
Content type: text
Media type: computer
Carrier type: online resource
ISBN: 9783031651120 ; 303165112X
Other title: SAIV 2024
DDC classification: 006.3 23/eng/20240724
LOC classification: Q334
| Item type | Current library | Collection | Call number | Status | Date due | Barcode | Item holds |
|---|---|---|---|---|---|---|---|
| eBook | e-Library | eBook LNCS | | Available | | | |
This LNCS volume constitutes the proceedings of the First International Symposium on AI Verification, SAIV 2024, held in Montreal, QC, Canada, during July 22-23, 2024. The topics fall into two broad groups. The first group, formal methods for artificial intelligence, comprises:
- formal specifications for systems with AI components;
- formal methods for analyzing systems with AI components;
- formal synthesis methods of AI components;
- testing approaches for systems with AI components;
- statistical approaches for analyzing systems with AI components;
- approaches for enhancing the explainability of systems with AI components.

The second group, artificial intelligence for formal methods, comprises:
- AI methods for formal verification;
- AI methods for formal synthesis;
- AI methods for safe control;
- AI methods for falsification.
Includes author index.
Online resource; title from PDF title page (SpringerLink, viewed July 24, 2024).
Intro -- Preface -- Organization -- Contents

Chronosymbolic Learning: Efficient CHC Solving with Symbolic Reasoning and Inductive Learning -- 1 Introduction -- 2 Preliminaries -- 2.1 Constrained Horn Clauses -- 2.2 CHC Solving as a Symbolic Classification Problem -- 3 Chronosymbolic Learning -- 3.1 Architecture of Chronosymbolic Learning -- 3.2 Samples and Zones -- 3.3 Incorporate Zones Within Learning Iterations -- 3.4 Overall Algorithm -- 4 Design of Learner and Reasoner -- 4.1 Learner: Data-Driven CHC Solving -- 4.2 Reasoner: Zones Discovery -- 5 Experiments -- 5.1 Experimental Settings -- 5.2 Baselines -- 5.3 Performance Evaluation -- 5.4 Ablation Study -- 6 Related Work -- 7 Discussion and Future Work -- 8 Conclusion -- A Extended Preliminary -- A.1 Extended Related Work -- A.2 CHCs and Program Safety Verification -- A.3 Support Vector Machine -- A.4 Decision Tree -- B Solution Space Analysis -- C Implementation Details -- C.1 Learner: Data-Driven CHC Solving -- C.2 Reasoner: Zones Discovery -- C.3 Preprocessing of CHC Systems -- D Experimental Details -- D.1 Statistics About the Benchmarks -- D.2 Detailed Settings for Chronosymbolic-cover -- D.3 Results on Using Different Random Seeds in DT -- D.4 Running Time Analysis -- References

Error Analysis of Shapley Value-Based Model Explanations: An Informative Perspective -- 1 Introduction -- 2 Background -- 2.1 Notation -- 2.2 Informative SVA -- 3 Observation Bias and Structural Bias Trade-Off -- 3.1 Overfitting and Underfitting of the RF -- 3.2 Explanation Error Decomposition -- 3.3 Over-Informative Explanation -- 3.4 Under-Informative Explanation -- 3.5 Explanation Error Analysis of Data-Smoothing Methods -- 3.6 Explanation Error Analysis of Distributional Assumptions-Based Methods -- 4 OOD Measurement of Distribution Drift -- 4.1 Distribution Drift and OOD Detection -- 5 Experiments -- 5.1 Distribution Drift Detection -- 5.2 Under-Informativeness Audit -- 5.3 Over-Informativeness Audit -- 6 Conclusions -- References

Concept-Based Analysis of Neural Networks via Vision-Language Models -- 1 Introduction -- 2 Preliminaries -- 3 Conspec Specification Language -- 3.1 Syntax -- 3.2 Semantics -- 4 Vision-Language Models as Analysis Tools -- 5 Case Study -- 5.1 Dataset, Concepts, and Strength Predicates -- 5.2 Models -- 5.3 Extraction of Concept Representations -- 5.4 Statistical Validation of rep via Strength Predicates -- 5.5 Verification -- 6 Related Work -- 7 Conclusion -- A Proofs -- B Captions Used for Computing CLIP Text Embeddings -- C Verification of CLIP -- References

Parallel Verification for -Equivalence of Neural Network Quantization -- 1 Introduction -- 2 Related Work -- 3 Preliminaries -- 3.1 Quantized Neural Networks (QNNs) -- 4 -Equivalence -- 5 MILP Encoding -- 6 Symbolic Interval Analysis -- 7 Parallelization -- 7.1 Process Management -- 7.2 Partitioning Strategies -- 8 Experiments -- 9 Efficiency -- 10 Verification Results
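For readers unfamiliar with the Shapley value attributions (SVAs) analyzed in the second paper above: the Shapley value of a player i in a cooperative game (N, v) is the average of i's marginal contributions v(S ∪ {i}) − v(S), weighted over all coalitions S ⊆ N \ {i}. The following brute-force sketch is background only, not code from the volume, and all names in it are hypothetical:

```python
from itertools import combinations
from math import factorial

def shapley_values(players, v):
    """Exact Shapley values by enumerating every coalition (O(2^n); fine for tiny n)."""
    n = len(players)
    phi = {}
    for i in players:
        others = [p for p in players if p != i]
        total = 0.0
        for k in range(n):
            for coalition in combinations(others, k):
                S = frozenset(coalition)
                # Weight of a coalition of size k in the Shapley formula: k!(n-k-1)!/n!
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                total += weight * (v(S | {i}) - v(S))
        phi[i] = total
    return phi

# Sanity check on an additive game: a coalition's value is the sum of its
# members' weights, so each player's Shapley value should equal its own weight.
weights = {"a": 1.0, "b": 2.0, "c": 3.0}
phi = shapley_values(list(weights), lambda S: sum(weights[p] for p in S))
print(phi)
```

Exact enumeration is exponential in the number of players, which is why practical SVA explainers approximate it by sampling; the paper's error analysis concerns exactly the biases such approximations introduce.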