Computer vision - ACCV 2020 : 15th Asian Conference on Computer Vision, Kyoto, Japan, November 30 - December 4, 2020 : revised selected papers. Part IV / Hiroshi Ishikawa, Cheng-Lin Liu, Tomas Pajdla, Jianbo Shi (eds.).
Material type: Text
Series: Lecture notes in computer science ; 12625. | LNCS sublibrary. SL 6, Image processing, computer vision, pattern recognition, and graphics.
Publisher: Cham : Springer, [2021]
Description: 1 online resource (xxviii, 715 pages) : illustrations (chiefly color)
Content type: text
Media type: computer
Carrier type: online resource
ISBN:
- 9783030695385
- 3030695387
- 3030695379
- 9783030695378
- 9783030695392
- 3030695395
Other title: ACCV 2020
Subject(s):
- Computer vision -- Congresses
- Computer vision
- Optical data processing
- Artificial intelligence
- Computers
- Pattern perception
- Application software
- Vision par ordinateur -- Congrès
- Traitement optique de l'information
- Intelligence artificielle
- Ordinateurs
- Perception des structures
- Logiciels d'application
DDC classification: 006.3/7 23
LOC classification: TA1634
| Item type | Current library | Collection | Call number | Status | Date due | Barcode | Item holds |
|---|---|---|---|---|---|---|---|
| eBook | e-Library | eBook LNCS | | Available | | | |
International conference proceedings.
Includes author index.
The six-volume set of LNCS 12622-12627 constitutes the proceedings of the 15th Asian Conference on Computer Vision, ACCV 2020, held in Kyoto, Japan, in November/December 2020.* A total of 254 contributions were carefully reviewed and selected from 768 submissions during two rounds of reviewing and improvement. The papers focus on the following topics:
- Part I: 3D computer vision; segmentation and grouping
- Part II: low-level vision, image processing; motion and tracking
- Part III: recognition and detection; optimization, statistical methods, and learning; robot vision
- Part IV: deep learning for computer vision; generative models for computer vision
- Part V: face, pose, action, and gesture; video analysis and event recognition; biomedical image analysis
- Part VI: applications of computer vision; vision for X; datasets and performance analysis

*The conference was held virtually.
Deep Learning for Computer Vision -- In-sample Contrastive Learning and Consistent Attention for Weakly Supervised Object Localization -- Exploiting Transferable Knowledge for Fairness-aware Image Classification -- Introspective Learning by Distilling Knowledge from Online Self-explanation -- Hyperparameter-Free Out-of-Distribution Detection Using Cosine Similarity -- Meta-Learning with Context-Agnostic Initialisations -- Second Order enhanced Multi-glimpse Attention in Visual Question Answering -- Localize to Classify and Classify to Localize: Mutual Guidance in Object Detection -- Unified Density-Aware Image Dehazing and Object Detection in Real-World Hazy Scenes -- Part-aware Attention Network for Person Re-Identification -- Image Captioning through Image Transformer -- Feature Variance Ratio-Guided Channel Pruning for Deep Convolutional Network Acceleration -- Learn more, forget less: Cues from human brain -- Knowledge Transfer Graph for Deep Collaborative Learning -- Regularizing Meta-Learning via Gradient Dropout -- Vax-a-Net: Training-time Defence Against Adversarial Patch Attacks -- Towards Optimal Filter Pruning with Balanced Performance and Pruning Speed -- Contrastively Smoothed Class Alignment for Unsupervised Domain Adaptation -- Double Targeted Universal Adversarial Perturbations -- Adversarially Robust Deep Image Super-Resolution using Entropy Regularization -- Online Knowledge Distillation via Multi-branch Diversity Enhancement -- Rotation Equivariant Orientation Estimation for Omnidirectional Localization -- Contextual Semantic Interpretability -- Few-Shot Object Detection by Second-order Pooling -- Depth-Adapted CNN for RGB-D cameras.
Generative Models for Computer Vision -- Over-exposure Correction via Exposure and Scene Information Disentanglement -- Novel-View Human Action Synthesis -- Augmentation Network for Generalised Zero-Shot Learning -- Local Facial Makeup Transfer via Disentangled Representation -- OpenGAN: Open Set Generative Adversarial Networks -- CPTNet: Cascade Pose Transform Network for Single Image Talking Head Animation -- TinyGAN: Distilling BigGAN for Conditional Image Generation -- A cost-effective method for improving and re-purposing large, pre-trained GANs by fine-tuning their class-embeddings -- RF-GAN: A Light and Reconfigurable Network for Unpaired Image-to-Image Translation -- GAN-based Noise Model for Denoising Real Images -- Emotional Landscape Image Generation Using Generative Adversarial Networks -- Feedback Recurrent Autoencoder for Video Compression -- MatchGAN: A Self-Supervised Semi-Supervised Conditional Generative Adversarial Network -- DeepSEE: Deep Disentangled Semantic Explorative Extreme Super-Resolution -- dpVAEs: Fixing Sample Generation for Regularized VAEs -- MagGAN: High-Resolution Face Attribute Editing with Mask-Guided Generative Adversarial Network -- EvolGAN: Evolutionary Generative Adversarial Networks -- Sequential View Synthesis with Transformer.
Online resource; title from PDF title page (SpringerLink, viewed March 23, 2021).
Access restricted to registered UOB users with valid accounts.