Perception and interactive technologies : International Tutorial and Research Workshop, PIT 2006, Kloster Irsee, Germany, June 19-21, 2006 : proceedings / Elisabeth André [and others] (eds.).
Material type: Text
Series: LNCS sublibrary. SL 7, Artificial intelligence. | Lecture notes in computer science ; 4021. | Lecture notes in computer science. Lecture notes in artificial intelligence.
Publication details: Berlin ; New York : Springer, 2006.
Description: 1 online resource (xi, 216 pages) : illustrations
Content type: text
Media type: computer
Carrier type: online resource
ISBN:
- 9783540347446
- 3540347445
- 3540347437
- 9783540347439
Other title: PIT 2006
Subject(s):
- Interactive computer systems -- Congresses
- Human-computer interaction -- Congresses
- User interfaces (Computer systems) -- Congresses
- Interaction personne-ordinateur -- Congrès
- Interfaces utilisateurs (Informatique) -- Congrès
- Systèmes conversationnels (Informatique) -- Congrès
- COMPUTERS -- Digital Media -- General
- COMPUTERS -- Web -- User Generated Content
- COMPUTERS -- Interactive & Multimedia
- COMPUTERS -- Web -- Site Design
- Informatique
- Human-computer interaction
- Interactive computer systems
- User interfaces (Computer systems)
- Perception technologies
- Interactive technologies
- PIT
- beeldverwerking
- image processing
- machine vision
- computers
- computerwetenschappen
- computer sciences
- kunstmatige intelligentie
- artificial intelligence
- man-machine interaction
- gebruikersinterfaces
- user interfaces
- Information and Communication Technology (General)
- Informatie- en communicatietechnologie (algemeen)
DDC classification: 006.7 22
LOC classification: QA76.9.I58 T86 2006eb
| Item type | Current library | Collection | Call number | Status | Date due | Barcode | Item holds |
|---|---|---|---|---|---|---|---|
| eBook | e-Library | eBook LNCS | | Available | | | |
Includes bibliographical references and index.
Print version record.
Contents:
- Head Pose and Eye Gaze Tracking
- Guiding Eye Movements for Better Communication and Augmented Vision
- Detection of Head Pose and Gaze Direction for Human-Computer Interaction
- Modelling and Simulation of Perception
- Modelling and Simulation of Spontaneous Perception Switching with Ambiguous Visual Stimuli in Augmented Vision Systems
- Neural Network Architecture for Modeling the Joint Visual Perception of Orientation, Motion, and Depth
- Integrating Information from Multiple Channels
- AutoSelect: What You Want Is What You Get: Real-Time Processing of Visual Attention and Affect
- Emotion Recognition Using Physiological and Speech Signal in Short-Term Observation
- Visual and Auditory Displays Driven by Perceptive Principles
- Visual Attention in Auditory Display
- A Perceptually Optimized Scheme for Visualizing Gene Expression Ratios with Confidence Values
- Spoken Dialogue Systems
- Combining Speech User Interfaces of Different Applications
- Learning and Forgetting of Speech Commands in Automotive Environments
- Help Strategies for Speech Dialogue Systems in Automotive Environments
- Multimodal and Situated Dialogue Systems
- Information Fusion for Visual Reference Resolution in Dynamic Situated Dialogue
- Speech and 2D Deictic Gesture Reference to Virtual Scenes
- Combining Modality Theory and Context Models
- Integration of Perceptive Technologies and Animation
- Visual Interaction in Natural Human-Machine Dialogue
- Multimodal Sensing, Interpretation and Copying of Movements by a Virtual Agent
- Poster Session
- Perception of Dynamic Facial Expressions of Emotion
- Multi-level Face Tracking for Estimating Human Head Orientation in Video Sequences
- The Effect of Prosodic Features on the Interpretation of Synthesised Backchannels
- Unsupervised Learning of Spatio-temporal Primitives of Emotional Gait
- System Demonstrations
- Talking with Higgins: Research Challenges in a Spoken Dialogue System
- Location-Based Interaction with Children for Edutainment
- An Immersive Game – Augsburg Cityrun
- Gaze-Contingent Spatio-temporal Filtering in a Head-Mounted Display
- A Single-Camera Remote Eye Tracker
- Miniature 3D TOF Camera for Real-Time Imaging.