
Book, English, Volume 15095, 507 pages, format (W × H): 155 mm × 235 mm, weight: 890 g

Series: Lecture Notes in Computer Science

Leonardis / Ricci / Varol

Computer Vision - ECCV 2024

18th European Conference, Milan, Italy, September 29-October 4, 2024, Proceedings, Part XXXVII
2025
ISBN: 978-3-031-72912-6
Publisher: Springer Nature Switzerland



The multi-volume set comprising LNCS volumes 15059 through 15147 constitutes the refereed proceedings of the 18th European Conference on Computer Vision, ECCV 2024, held in Milan, Italy, September 29 – October 4, 2024.

The 2387 papers presented in these proceedings were carefully reviewed and selected from a total of 8585 submissions. They cover topics such as computer vision; machine learning; deep neural networks; reinforcement learning; object recognition; image classification; image processing; object detection; semantic segmentation; human pose estimation; 3D reconstruction; stereo vision; computational photography; neural networks; image coding; image reconstruction; and motion estimation.


Target audience


Research

Further information & material


EgoCVR: An Egocentric Benchmark for Fine-Grained Composed Video Retrieval
Synchronization of Projective Transformations
TLControl: Trajectory and Language Control for Human Motion Synthesis
Insect Identification in the Wild: The AMI Dataset
Cross-view image geo-localization with Panorama-BEV Co-Retrieval Network
F-HOI: Toward Fine-grained Semantic-Aligned 3D Human-Object Interactions
Test-time Model Adaptation for Image Reconstruction Using Self-supervised Adaptive Layers
SHIC: Shape-Image Correspondences with no Keypoint Supervision
GenRC: Generative 3D Room Completion from Sparse Image Collections
A Probability-guided Sampler for Neural Implicit Surface Rendering
ReMatching: Low-Resolution Representations for Scalable Shape Correspondence
Where am I? Scene Retrieval with Language
This Probably Looks Exactly Like That: An Invertible Prototypical Network
Arc2Face: A Foundation Model for ID-Consistent Human Faces
PhysAvatar: Learning the Physics of Dressed 3D Avatars from Visual Observations
Revisiting Feature Disentanglement Strategy in Diffusion Training and Breaking Conditional Independence Assumption in Sampling
SweepNet: Unsupervised Learning Shape Abstraction via Neural Sweepers
Leveraging Thermal Modality to Enhance Reconstruction in Low-Light Conditions
On the Viability of Monocular Depth Pre-training for Semantic Segmentation
Fairness-aware Vision Transformer via Debiased Self-Attention
EgoPet: Egomotion and Interaction Data from an Animal's Perspective
Deep Companion Learning: Enhancing Generalization Through Historical Consistency
Neural graphics texture compression supporting random access
Contrastive Learning with Synthetic Positives
GeneralAD: Anomaly Detection Across Domains by Attending to Distorted Features
Interpretability-Guided Test-Time Adversarial Defense
DIM: Dyadic Interaction Modeling for Social Behavior Generation


