
Structured Representation Learning : From Homomorphisms and Disentanglement to Equivariance and Topography / by Yue Song, Thomas Anderson Keller, Nicu Sebe, Max Welling

Material type: Text
Language: English
Series: Synthesis Lectures on Computer Vision
Publisher: Cham : Springer Nature Switzerland : Imprint: Springer, 2026
Edition: 1st ed. 2026
ISBN:
  • 9783031881107
DDC classification:
  • 006.31 23
Contents: Introduction -- Background -- Topographical Variational AutoEncoders -- Neural Wave Machines -- Latent Traversal as Potential Flows -- Flow Factorized Representation Learning -- Unsupervised Factorized Representation Learning through Sparse Transformation Analysis -- Conclusion.

Summary: This book introduces approaches that generalize the benefits of equivariant deep learning to a broader set of learned structures through learned homomorphisms. In machine learning, the idea of incorporating knowledge of data symmetries into artificial neural networks is known as equivariant deep learning, and it has led to cutting-edge architectures for image and physical data processing. The power of these models originates from data-specific structures ingrained in them through careful engineering. To date, however, practitioners can build such structure into models only when the data exactly obeys specific mathematical symmetries. The authors discuss inductive biases inspired by natural systems, specifically those that may provide efficiency and generalization benefits through what are known as homomorphic representations: a new, general type of structured representation inspired by techniques from physics and neuroscience. A review of some of the first attempts at building models with learned homomorphic representations is presented. The authors demonstrate that these inductive biases improve the ability of models to represent natural transformations and ultimately pave the way toward more efficient and effective artificial neural networks. In addition, this book:
  • Offers a novel definition of generalized equivariance and a literature review on learned homomorphisms;
  • Provides clarity as to the unifying goals of the newly emerging subfield of learned approximate symmetries;
  • Emphasizes that naturally intelligent systems have generalization capabilities and data efficiency beyond those of artificial models.
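As background for the summary above, the standard equivariance condition that the book sets out to generalize can be stated as follows (a textbook formalization, not quoted from this record): a feature map f : X -> Z is equivariant to a group G acting on the input space X and the latent space Z via representations rho_X and rho_Z if

    f(\rho_X(g)\, x) = \rho_Z(g)\, f(x) \qquad \text{for all } g \in G,\ x \in X.

Learned homomorphic representations, as described in the summary, relax this requirement: rather than a fixed, exactly known group action, the latent transformation is a learned map that need only be homomorphic to (structure-preserving with respect to) the data transformation, so approximate or data-driven symmetries can also be captured.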
List(s) this item appears in: New Arrivals October 2025
